PRATT'S IMPORTANCE MEASURES IN FACTOR ANALYSIS: A NEW TECHNIQUE FOR INTERPRETING OBLIQUE FACTOR MODELS

by

AMERY DAI LING WU

B.A., Soochow University, 1991
M.A., Middlesex University, 1993

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Measurement, Evaluation, and Research Methodology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

May 2008

© Amery Dai Ling Wu, 2008

Abstract

This dissertation introduces a new method, Pratt's measure matrix, for interpreting multidimensional oblique factor models in both exploratory and confirmatory contexts. Overall, my thesis, supported by empirical evidence, refutes the currently recommended and practiced methods for understanding an oblique factor model; that is, interpreting the pattern matrix or structure matrix alone, or juxtaposing both without integrating the information.

Chapter Two reviews the complexities of interpreting a multidimensional factor solution due to factor correlation (i.e., obliquity). The three major complexities highlighted are (1) the inconsistency between the pattern and structure coefficients, (2) the distortion of the additive properties, and (3) the inappropriateness of the traditional cut-off rules for deciding what is "meaningful". Chapter Three provides the theoretical rationale for adapting Pratt's importance measures from their use in multiple regression to factor analysis. The new method is demonstrated and tested with both continuous and categorical data in exploratory factor analysis. The results show that Pratt's measures are applicable to factor analysis and are able to resolve the three interpretational complexities arising from factor obliquity. In the context of confirmatory factor analysis, Chapter Four warns researchers that a structure coefficient can be entirely spurious due to factor obliquity combined with a zero constraint on its corresponding pattern coefficient. Interpreting such structure coefficients as Graham et al. (2003) suggested can be problematic. The mathematically better justified method is to transform the pattern and structure coefficients into Pratt's measures.

The last chapter describes eight novel contributions of this dissertation. The new method is the first attempt at ordering the importance of latent variables for multivariate data. It is also the first attempt at demonstrating and explicating the existence, mechanism, and implications of the suppression effect in factor analysis. Specifically, the new method resolves the three interpretational problems due to factor obliquity, assists in identifying a better-fitting exploratory factor model, proves that a structure coefficient in a confirmatory factor analysis with a zero pattern constraint is entirely spurious, avoids the debate over the choice of oblique versus orthogonal factor rotation, and, last but not least, provides a tool for consolidating the role of factors as the underlying causes.

Table of Contents

Abstract
List of Tables
Lists of Symbols and Abbreviations
Co-authorship Statement
Chapter One: Brief Background for Factor Analysis
  1.1 What is factor analysis and why factor analyze?
  References
Chapter Two: What Has the Literature Recommended for Interpreting Factor Models?
  2.1 Recommendations for Interpreting a Unidimensional Factor Model
  2.2 Complexities in Interpreting a Multidimensional Factor Model
  2.3 Review of Practices and Recommendations for Interpreting Multidimensional Factor Models
  References
Chapter Three: Pratt's Importance Measures in Exploratory Factor Analysis
  3.1 The Use of Pratt's Importance Measures in Linear Multiple Regression
  3.2 The Rationale for Applying Pratt's Importance Measures to Factor Analysis
  3.3 A Demonstration of Pratt's Measures in EFA for Continuous Data
  3.4 A Demonstration of Pratt's Measures in EFA for Categorical Data
  References
Chapter Four: Demonstration of Pratt's Measures in Confirmatory Factor Analysis
  4.1 Pratt's Measures in CFA with No Factorial Complexities
  4.2 Pratt's Measures in CFA with Factorial Complexity
  4.3 Comparing the Fit of the Pratt's Measures Model: Additional CFA Case Studies
  References
Chapter Five: Contribution, Limitation, and Future Research
  5.1 Recapitulation
  5.2 Novel Contributions
  5.3 Caveats and Limitations
  5.4 Suggestions for Future Research
  References
Appendices

List of Tables

Chapter Two
  Table 2.1 Review of Recommendation and Practice for Interpreting a Factor Solution
  Table 2.2 Vertical and Horizontal Additive Properties of the Orthogonal Factor Model
  Table 2.3 Distortion of the Horizontal Additive Property in the Pattern Coefficients
  Table 2.4 Distortion of the Horizontal Additive Property in the Structure Coefficients
  Table 2.5 Inappropriateness of Traditional Rules for Interpreting the Pattern Matrix
Chapter Three
  Table 3.1 Correlation Matrix of the Five Explanatory Variables for TIMSS Mathematics Achievement
  Table 3.2 Pratt's Measures for the Five Explanatory Variables for TIMSS Mathematics Achievement
  Table 3.3 Pattern, Structure, Pattern×Structure, and Pratt's Measure Matrices, and Communalities for Holzinger and Swineford's (1939) Psychological Ability Data
  Table 3.4 Eigenvalues and Parallel Analysis for 2003 TIMSS Outside School Activities Data
  Table 3.5 Pattern, Structure, Pattern×Structure, and Pratt's Measure Matrices, and Communalities for 2003 TIMSS Outside School Activities Data
Chapter Four
  Table 4.1 Factor Solution for Case One: with No Factorial Complexities
  Table 4.2 Factor Solution for Case Two: with One Factorial Complexity
  Table 4.3 Comparisons of CFA Fit Indices of Models Identified by Different EFA Coefficients and Cut-offs

Lists of Symbols and Abbreviations

Symbols (in alphabetical order)

English letters
  %(F): the percentage of total variance explained by a given factor
  Cov(F1, F2): the covariance between factor one and factor two
  d_p: the Pratt measure for explanatory variable/factor p
  D: the Pratt's measure matrix
  F_qp: the factor p for response variable q
  h²: the communality
  L: the loading matrix
  p: the number of explanatory variables/factors
  P: the pattern matrix
  PS: a matrix in which the elements are the products of a given pattern coefficient and its corresponding structure coefficient
  q: the number of observed response variables
  r: the Pearson correlation
  r_m: the average correlation in the observed correlation matrix
  R: the correlation matrix among the factors
  S: the structure matrix
  Sqrt(PS): the square root of PS
  U_q: the error term for response variable q
  Var(Ŷ): the variance of the predicted response variable Ŷ
  Var(F1): the variance of factor one
  Var(F2): the variance of factor two
  X_qp: the explanatory variable p for response variable q
  Y_q: the observed score of response variable q
  Ŷ_q: the fitted (predicted) score of response variable q

Greek letters
  β_p: the standardized partial regression coefficient for explanatory variable p
  β_qp: the standardized partial regression weight (i.e., loading or pattern coefficient) of explanatory variable/factor p on response variable q
  ρ_p: the simple correlation between explanatory variable p and the response variable
  ΣFD: the sum of the elements in D for a given test across the four factors
  ΣTD: the sum of the elements in D for a given factor across the 24 tests
  ΣFL²: the sum of the squared loadings for a given test across the factors
  ΣTL²: the sum of the squared loadings for a given factor along the tests
  ΣFP²: the sum of the squared pattern coefficients for a given test across the factors
  ΣTP²: the sum of the squared pattern coefficients for a given factor along the tests
  ΣFPS: the sum of the elements in PS for a given test across the four factors
  ΣTPS: the sum of the elements in PS for a given factor across the 24 tests
  ΣFS²: the sum of the squared structure coefficients for a given test across the factors
  ΣTS²: the sum of the squared structure coefficients for a given factor along the tests

Abbreviations (in alphabetical order)
  AIC: Akaike's information criterion
  CAIC: corrected Akaike's information criterion
  CFA: confirmatory factor analysis
  EFA: exploratory factor analysis
  FA: factor analysis
  MLM: multivariate linear model
  MV: multivariate
  PA: parallel analysis
  SEM: structural equation modeling

Co-Authorship Statement

Chapter Two will be revised into a manuscript co-authored with Dr. Bruno D. Zumbo and Dr. Anita Hubley at the University of British Columbia. As the first author, I was in charge of all aspects of this research project, including identification of the research questions, literature reviews, syntheses, critiques, and conclusions. I will also be in charge of the writing of the manuscript. Both co-authors contributed to the literature review and will assist in the preparation and revision of the manuscript.

Chapters Three and Four will be revised into two manuscripts co-authored with Dr. Bruno D. Zumbo at the University of British Columbia and Dr. Roland D. Thomas at Carleton University. As the first author, I was in charge of all aspects of these two projects, including formulating research questions, literature review, research design, and data analyses. I will also be in charge of the writing of the manuscripts. Both co-authors contributed to the identification and design of the research projects and will assist in the preparation and revision of these manuscripts.

Chapter One: Brief Background for Factor Analysis

1.1 What is factor analysis and why factor analyze?

The major purpose of factor analysis is to identify a parsimonious number of common factors from a larger set of observed variables so that people can have a more concise conceptualization of the observed variables. There are two major uses of factor analysis. First, it is used as a measurement validation tool for test¹ development and refinement.
The major purpose is to interpret the underlying construct(s)² and investigate how well each item measures the construct(s). Second, it is used to uncover the governing dimensions underlying a set of psychological domains such as personality traits. Theoretically, the common factors are assumed to be the "hidden underlying causes" of the variation in the observed variables (Borsboom, Mellenbergh, & van Heerden, 2003, 2004; Burt, 1940; Hoyle & Duvall, 2004; Rummel, 1970; Spearman, 1904; Zumbo, 2007).³ Statistically, factor analysis extracts a set of latent variables to account for the covariances among the observed variables. In essence, the extracted factors, functioning like independent variables,⁴ partition the variance of an observed variable into two parts: the common variance (i.e., communality), the proportion of variance that is shared in common with the rest of the observed variables, and the unique variance, the proportion of variance that is specific to an observed variable and arises from random variation, which is equal to one minus the communality.

¹ Throughout the dissertation, the terms "test" and "scale" are used interchangeably with the terms "observed indicator" and "observed variable", which denote the dependent variables in a factor analysis. If one is concerned with psychological measures, for example, one may speak of "scales", whereas in educational or certification settings one speaks of "tests".

² It is crucial to point out the distinction between a latent variable (i.e., factor) and a construct. As Zumbo (2007) reminds us, although it is often confused even in the technical measurement literature, the construct is not the same as the true score or latent variable, which, in practical settings, is not the same as the observed item or task score. The essential difference is that a latent variable is a statistical and mathematical variable created by the data analyst and statistical modeler, for which respondents (or examinees) could receive a predicted score based on their item responses. A construct, on the other hand, is an abstract or theoretical entity that has meaning because of its relation to other abstract variables and a theory of the concept being studied. In short, one cannot get an empirically realized score on a construct, as one can on a latent variable. Test validity then involves an inference from the item responses to the construct via the latent variable; please see Zumbo (2007) for more details.

³ Historically, there are theorists who have strongly argued against the reification of factors as entities and against interpreting them as causes underlying the data covariances (e.g., Gould, 1981). This dissertation does not intend to take part in this historical debate; rather, it recognizes the possible error of making ruthless causal interpretations, but does not refute factor analysis as a potentially useful tool for attempting causal interpretations, even in a weak form.

⁴ Throughout this dissertation, the term "independent variable" is used interchangeably with "explanatory variable" or "predictor". The term "dependent variable" is used interchangeably with "response variable".
A factor analysis can be written as a regression equation such that

Y_q = β_q1 F_q1 + β_q2 F_q2 + ... + β_qp F_qp + U_q,   (1.1)

or

Ŷ_q = β_q1 F_q1 + β_q2 F_q2 + ... + β_qp F_qp,   (1.2)

where Y_q is the standardized score for observed response variable q, F_qp is the score of factor p for observed response variable q, β_qp is the standardized partial regression weight of factor p on observed response variable q, U_q is the unique term (i.e., residual) for observed response variable q, and Ŷ_q is the predicted (fitted) score for observed response variable q.

Three major differences between equation (1.1) and a typical multiple regression are: (a) q multiple observed response variables (i.e., dependent variables), Y_q, are regressed simultaneously; (b) the latent independent variables, F_qp, are the common factors (i.e., independent variables) that are created by accounting for the covariances among the q observed variables; and (c) the weights β_qp for these factors are typically termed factor loadings in a factor analysis.

A factor solution is referred to as unidimensional if only one factor in equation (1.1) is considered sufficient to account for the covariances among the observed variables, or multidimensional if two or more factors are entailed. When a multidimensional solution is chosen, the factors can be hypothesized to be inter-correlated, referred to as an oblique solution, or uncorrelated, referred to as an orthogonal solution. In practice, the choice of obliquity or orthogonality often depends on theoretical and/or empirical grounds as well as the ease of interpretation that comes with each solution. This issue will be discussed in detail in Chapter Two.

Exploratory factor analysis (EFA) versus confirmatory factor analysis (CFA) is a common classification of factor analyses (Jöreskog, 1969). Conceptually, the distinction between EFA and CFA lies in whether the investigator has a firm expectation of the underlying factor structure based on theoretical and/or empirical grounds (Church & Burke, 1994; Floyd & Widaman, 1995; Henson & Roberts, 2006). CFA requires a priori model specification regarding four elements of the factor structure: (1) the number of factors; (2) the correlations among the factors, if the model is multidimensional; (3) the loadings of the factor(s) on the observed variables; and, if necessary, (4) the correlations among the unique terms (Jöreskog & Sörbom, 1999; Wu, Li, & Zumbo, 2007). EFA, in contrast, is used when the investigator has no clear hypothesis about the above four elements and aims to explore the unknown structure of the empirical data and the substantive meaning of the factors.

In statistical terms, the CFA vs. EFA distinction lies in whether any restriction is placed on the parameter estimates. Thus, CFA and EFA are also distinguished as restricted versus unrestricted factor analysis (Ferrando & Lorenzo-Seva, 2000; Jöreskog & Sörbom, 1999). CFA constrains a subset of the model parameters to some fixed values according to the investigator's hypothesis (typically zeros). It is "confirmatory" in the sense that a CFA rejects or retains the restricted model using formal hypothesis testing (Floyd & Widaman, 1995; Zumbo, Sireci, & Hambleton, 2003). In contrast, EFA does not constrain any model parameters and allows all the parameters to be freely estimated (except in the orthogonal case, where the factor correlations are constrained to be zero).
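Equation (1.1) and the EFA versus CFA distinction can be made concrete with a small simulation. The Python sketch below is a minimal illustration, not taken from the dissertation's data; all numbers are hypothetical. It generates scores from a two-factor oblique model, where the zero entries in the pattern matrix play the role of the fixed constraints a CFA would impose a priori, whereas an EFA would leave every entry free to be estimated.

```python
# A minimal numeric sketch of equation (1.1): observed scores as a linear
# combination of common factors plus a unique term (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 1000, 2, 6  # persons, factors, observed variables

# Correlated (oblique) factor scores: Var(F1) = Var(F2) = 1, Cov = .5
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
F = rng.multivariate_normal(np.zeros(p), R, size=n)

# Pattern matrix (beta_qp). The zeros mimic the constraints a CFA would
# impose a priori; an EFA would estimate every entry freely.
P = np.array([[0.8, 0.0],
              [0.7, 0.0],
              [0.6, 0.0],
              [0.0, 0.8],
              [0.0, 0.7],
              [0.0, 0.6]])

# Unique variances chosen so each observed variable has variance near 1
h2 = np.einsum('qp,pk,qk->q', P, R, P)        # communalities under obliquity
U = rng.normal(size=(n, q)) * np.sqrt(1 - h2)

Y = F @ P.T + U                               # equation (1.1), all q variables at once
print(np.round(np.corrcoef(Y, rowvar=False), 2))
```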
EFA results are sometimes used as the empirical groundwork for specifying a CFA hypothesis when no known theory is available to guide the parameter constraints.

Without a doubt, the credibility of factor analytic results, whether from EFA or CFA, requires a user's craftsmanship in making a series of critical judgements about how to choose a statistically optimal and theoretically sensible model (Brown, 2006; Fabrigar, Wegener, MacCallum, & Strahan, 1999; Gorsuch, 1983; Russell, 2002). Among many others, interpretability is one of the key criteria for evaluating the credibility of a factor solution (Gorsuch, 1983; Nunnally, 1978). Confidence in a chosen factor solution is reserved if the results are mathematically or theoretically difficult to interpret.

The overall purpose of this dissertation is to propose a new method for assisting in the interpretation of factor analyses, in particular, oblique factor models. This dissertation has produced three manuscripts to be revised and submitted for publication. Thus, the format of this dissertation follows the guidelines of a manuscript-based dissertation required by the Faculty of Graduate Studies at the University of British Columbia. Chapter Two, the first manuscript, includes an introduction and a literature review that set the context for the other two manuscripts. It explicates the interpretational problems and complexities inherent in an oblique factor model. It also critiques the commonly recommended and often practiced methods for interpreting an oblique factor model. Chapter Three, the second manuscript, introduces the new method, the Pratt's measure matrix, which resolves the problems and complexities in interpreting an oblique factor model described in Chapter Two. It provides the theoretical rationale and two real-data demonstrations to substantiate the use of the Pratt's measure matrix in EFA. Through the use of the Pratt's measure matrix, Chapter Four, the third manuscript, argues and demonstrates that the method suggested by Graham, Guthrie, and Thompson (2003) for interpreting an oblique CFA model can be problematic. The last chapter summarizes the improvements brought by the new method to resolving the problems of interpreting oblique factor models. Future research to improve the new method is also suggested.

References

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110, 203-219.

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111, 1061-1071.

Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: The Guilford Press.

Burt, C. (1940). The factors of the mind. London: University of London Press.

Church, J. T., & Burke, P. J. (1994). Exploratory and confirmatory tests of the big five and Tellegen's three- and four-dimensional models. Journal of Personality and Social Psychology, 66, 93-114.

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 3, 272-299.

Ferrando, P. J., & Lorenzo-Seva, U. (2000). Unrestricted versus restricted factor analysis of multidimensional test items: Some aspects of the problem and some suggestions. Psicológica, 21, 301-323.

Floyd, F. J., & Widaman, K. F. (1995). Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment, 7, 286-299.

Gould, S. J. (1981). The mismeasure of man.
New York: W. W. Norton.

Graham, J. M., Guthrie, A. C., & Thompson, B. (2003). Consequences of not interpreting structure coefficients in published CFA research: A reminder. Structural Equation Modeling, 10, 142-152.

Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educational and Psychological Measurement, 66, 393-416.

Hoyle, R. H., & Duvall, J. L. (2004). Determining the number of factors in exploratory and confirmatory factor analysis. In D. Kaplan (Ed.), The SAGE handbook of quantitative methodology for the social sciences (pp. 301-315). Thousand Oaks, CA: Sage.

Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34, 183-202.

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Rummel, R. J. (1970). Applied factor analysis. Evanston, IL: Northwestern University Press.

Russell, D. W. (2002). In search of underlying dimensions: The use (and abuse) of factor analysis in Personality and Social Psychology Bulletin. Personality and Social Psychology Bulletin, 28(12), 1629-1646.

Spearman, C. (1904). General intelligence, objectively determined and measured. American Journal of Psychology, 15, 201-293.

Wu, A. D., Li, Z., & Zumbo, B. D. (2007). Decoding the meaning of factorial invariance and updating the practice of multi-group confirmatory factor analysis: A demonstration with TIMSS data. Practical Assessment, Research & Evaluation, 12(2). Available online: http://pareonline.net/genpare.asp?wh=0&abt=12

Zumbo, B. D. (2007). Validity: Foundational issues and statistical methodology. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics, Vol. 26: Psychometrics (pp. 45-79).

Zumbo, B. D., Sireci, S. G., & Hambleton, R. K. (2003, April). Re-visiting exploratory methods for construct comparability and measurement invariance: Is there something to be gained from the ways of old? Paper presented at the annual meeting of the National Council on Measurement in Education (NCME), Chicago, Illinois.

Chapter Two: What Has the Literature Recommended for Interpreting Factor Solutions?

This chapter reviews the commonly recommended strategies for interpreting factor solutions. In particular, it describes the difficulties in interpreting an oblique factor model. This chapter also critiques and explains why the current recommendation of juxtaposing both the pattern and structure coefficients to overcome the difficulties in interpreting an oblique factor model is insufficient and problematic.¹

2.1 Review of Recommendations for Interpreting a Unidimensional Factor Solution

In a factor analysis, researchers are interested in making two distinct inferences based on the data. The first is a logical inference that involves assigning meaning to the factors and then making causal inferences about the factors (Cattell, 1952; Kim & Mueller, 1978; Rummel, 1970). The second is a statistical inference that involves generalizing the first type of inference to the population based on a given sample (Kim & Mueller, 1978). The fundamental theory of factor analysis shown in equation (1.1) is, in essence, a directional model and implies, at least theoretically, that the factors are the "hidden underlying causes" of the variation in the observed variables (Borsboom et al., 2003, 2004; Burt, 1940; Hoyle & Duvall, 2004; Rummel, 1970; Spearman, 1904; Zumbo, 2007).
In a path diagram, this directional relationship is represented by arrows going from the factors to the observed variables. The regression weight in equation (1.1) indicates the change in the observed score per unit change in the factor score. The term loading matrix refers to such an array of regression coefficients of the p factors on the q observed variables. Unlike typical directional inferences, however, the causal interpretation is complicated by the fact that the meaning of the underlying causes, the factors, is unknown to the investigator. Factors are merely latent variables of mathematical creation, which do not automatically have substantive meaning. To make a directional interpretation, the meaning of a factor must be fairly well known to the researchers. Traditionally, the meaning of a factor is inferred from the common meaning of the observed variables that load "meaningfully" (or "saliently") on that factor, and accordingly, a heuristic label is assigned to the factor.

In this dissertation, we refer to the task of assigning meaning to factors as vertical interpretation and the task of making causal inferences about factors as horizontal interpretation. These two interpretational perspectives will be discussed in detail later in this chapter. For both perspectives, the first interpretational challenge is to decide on what constitutes a "meaningful" loading. As with any statistic, factor loadings can be impressive by chance (Cudeck & O'Dell, 1994; Gorsuch, 1983; Harman, 1976; Horn, 1967; Humphreys, Ilgen, McGrath, & Montanelli, 1969; Kim & Mueller, 1978). One approach to deciding whether a given loading is meaningful is to use inferential statistics, that is, to test whether the loading is statistically significantly different from a particular value, typically zero (Archer & Jennrich, 1973; Cliff & Hamburger, 1967; Cudeck & O'Dell, 1994; Harman, 1976; Henrysson, 1950; Jennrich, 1973). The advantage of this approach is that it relies on formal hypothesis testing and allows the construction of confidence intervals for the loadings.

The use of hypothesis testing is, however, limited by its technical difficulties. The estimation of the standard errors of the loadings is a complex function of the sample size, the number of factors and observed variables, the estimation method, the rotation procedure, and the correlation among the factors (Cliff & Hamburger, 1967; Cudeck & O'Dell, 1994; Gorsuch, 1983). Take sample size, for instance: a given loading could be considered significantly different from zero with a large sample size but non-significantly different from zero with a small sample size. In other words, the limitation of using hypothesis testing as the criterion for being "meaningful" is that it can detect a trivial departure from zero if the sample size is large or fail to detect a meaningful relationship when the sample size is small. In addition, investigators are often more interested in what constitutes "practical significance" rather than whether the loading differs from zero. As a response to the estimation difficulty and the lack of practical usefulness of hypothesis testing, the literature has suggested several rules for the minimum cut-off beyond which a loading is considered salient enough to be practically meaningful (Cudeck & O'Dell, 1994; Gorsuch, 1983; Rummel, 1970).

¹ A version of this chapter will be submitted for publication: Wu, A. D., Zumbo, B. D., & Hubley, A. Common interpretational problems with oblique factor models.
A salient loading is one that is sufficiently high to assume that a meaningful relationship exists between the observed variable and the factor (Gorsuch, 1983). Today, several rules are commonly adopted among practitioners. Harman (1967, p. 435) provided a table of standard errors for loadings with sample sizes ranging from 20 to 500 and r_m ranging from .10 to .75, where r_m is the average correlation in the observed correlation matrix. In defining a "meaningful loading", Comrey and Lee (1992) used "variance explained" criteria for deciding the importance of factor loadings: loadings in excess of 0.71 (i.e., 0.71 × 0.71 = 50% of common variance) are considered excellent, 0.63 (40% of common variance) very good, 0.55 (30% of common variance) good, 0.45 (20% of common variance) fair, and 0.32 (10% of common variance) poor. They concluded that only variables with loadings of 0.32 or greater should be interpreted.

A caveat for any cut-off is that using the same lower bound for all loadings in a factor solution may be problematic. Doing so ignores the fact that, for a given sample, different loading estimates have different sampling variability, not to mention sample-to-sample variation (Cudeck & O'Dell, 1994). For this reason, based on an empirical review, Gorsuch (1983) suggested that a minimum loading of 0.3 for an orthogonal solution is sufficient to be considered significantly different from zero for a sample size of 175, but 0.4 for a sample size of 100. Considering the sampling variation, Gorsuch (1983) also provided a rough guide for salient loadings: double the Pearson correlation required for statistical significance at the given sample size. For example, for a sample size of 100, a factor loading of 0.4 is the necessary minimum for being considered salient because a Pearson correlation of 0.2 is the necessary minimum for being considered statistically significant for the same sample size. Despite these rules, the choice of minimum cut-off is often left to the discretion of the researchers, and the only consensus is "the greater the loading, the better."
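To make the variance-explained bands concrete, the short Python sketch below (not part of the original thesis; the example loadings are hypothetical) labels a set of loadings according to Comrey and Lee's (1992) criteria.

```python
# A minimal sketch applying Comrey and Lee's (1992) "variance explained"
# bands to loadings. A loading explains loading**2 of an item's variance.
def comrey_lee_label(loading: float) -> str:
    """Label a loading by the share of variance it explains."""
    a = abs(loading)
    if a >= 0.71:
        return "excellent (>= 50% of variance)"
    if a >= 0.63:
        return "very good (>= 40% of variance)"
    if a >= 0.55:
        return "good (>= 30% of variance)"
    if a >= 0.45:
        return "fair (>= 20% of variance)"
    if a >= 0.32:
        return "poor (>= 10% of variance)"
    return "below the 0.32 interpretive floor"

for loading in (0.75, 0.58, 0.40, 0.20):   # hypothetical example loadings
    print(f"{loading:+.2f}: {comrey_lee_label(loading)}")
```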
Table 2.1 summarizes a sample of 40 scholarly entries on the method of factor analysis from the inception of oblique factor analysis (the 1940s) to date. It includes 20 textbooks devoted entirely to the topic of factor analysis (including one monograph). It also includes 11 textbooks that devoted at least one chapter to the method of factor analysis; among them, nine are multivariate statistics textbooks, one is a psychometrics textbook, and one is a latent variable modeling textbook. All textbooks were borrowed from the libraries of the University of British Columbia and Simon Fraser University that were available at the time of the search. The other nine entries are journal articles that provided reviews of, or guidelines on, the use of factor analysis. Journal articles were retrieved from the EBSCO database by simultaneously searching the key words "factor analysis" and "review". The first column of Table 2.1 lists the author's name and the publication year. The second column specifies the type of scholarly work. The third column lists the minimum cut-off value practiced or reported by the authors. The last three columns list other interpretational practices, which Section 2.3 of this chapter will address.

From column three, one can see that among the 40 entries, 13 did not use any cut-off for loading interpretation; the minimum cut-offs reported by the other 27 works range from 0.10 to 0.50, with values of 0.3 or 0.4 being the most popular.

Table 2.1 Review of Recommendation and Practice for Interpreting a Factor Solution

Author | Type of literature | Minimum cut-off accepted | Differentiated P & S? | P or S interpreted? | Juxtaposed P & S?
Burt (1940) | FA textbook | No | Yes | Pattern | -
Thurstone (1947) | FA textbook | No | Yes | Pattern | -
Cattell (1952) | FA textbook | 0.1 or 0.13 | Yes | Pattern | -
Fruchter (1954) | FA textbook | 0.3 | Yes | Pattern | -
Horst (1965) | FA textbook | No | No | Orthogonal | -
Harman (1967) | FA textbook | 0.3 or 0.4 | Yes | Both | Yes
Rummel (1970) | FA textbook | 0.5 | Yes | Both | Yes
Guertin & Bailey (1970) | FA textbook | 0.3 or 0.4 | Yes | Both | -
Lawley & Maxwell (1971) | FA textbook | No | No | Pattern | -
Mulaik (1972) | FA textbook | No | Yes | Both | -
Kim & Mueller (1978) | FA monograph | 0.3 | Yes | Both | Yes
Nunnally (1978) | Psychometrics textbook | No | Yes | Both | Yes
Cattell (1978) | FA textbook | 0.3 or 0.4 | No | Both | -
Press (1982) | MV stats textbook | 0.168 | No | Pattern | -
Gorsuch (1983) | FA textbook | 0.3 or 0.4 | Yes | Both | Yes
Cureton & d'Agostino (1983) | FA textbook | 0.2 | Yes | Pattern | Yes
McDonald (1985) | FA textbook | 0.29 | Yes | Pattern | -
Ford et al. (1986) | Review/guideline article | 0.15 | No | No indication | -
Yates (1987) | MV stats textbook | 0.23 | Yes | Both | Yes
Comrey & Lee (1992) | FA textbook | 0.3 | Yes | Both | -
Floyd & Widaman (1995) | Review/guideline article | 0.3 or 0.4 | No | No indication | -
Kline (1993) | FA textbook | 0.3 | Yes | Structure | -
Ferguson & Cox (1993) | Review/guideline article | 0.4 | No | Pattern | -
Basilevsky (1994) | FA textbook | 0.2 or 0.3 | Yes | Both | -
Stevens (1996) | MV stats textbook | 0.4 | Yes | Orthogonal | -
Thompson (1997) | Journal guideline article | No | Yes | Both | Yes
Loehlin (1998) | Latent variable textbook | No | Yes | Both | Yes
Fabrigar et al. (1999) | Review/guideline article | 0.3 or 0.4 | No | Pattern | -
Cudeck (2000) | MV stats book chapter | 0.3 | Yes | Both | Yes
Russell (2002) | Review/guideline article | No | Yes | Both | Yes
Timm (2002) | MV stats textbook | No | Yes | Orthogonal | -
Johnson & Wichern (2002) | MV stats textbook | 0.416 | No | Orthogonal | -
Preacher & MacCallum (2003) | Review/guideline article | No | No | No indication | -
Pett et al. (2003) | FA textbook | 0.4 | Yes | Both | -
Conway & Huffcutt (2003) | Review/guideline article | No | No | No indication | -
Giri (2004) | MV stats textbook | No | No | No indication | -
Brown (2006) | FA textbook | 0.3 or 0.4 | Yes | No indication | -
Hair et al. (2006) | MV stats textbook | 0.3 | No | Pattern | -
Henson & Roberts (2006) | Review/guideline article | 0.4 | Yes | Orthogonal | -
Tabachnick & Fidell (2007) | MV stats textbook | 0.3 | Yes | Pattern | -

Note. P: pattern matrix; S: structure matrix; MV: multivariate; FA: factor analysis. The list is temporally ordered.

2.2 Complexities in Interpreting a Multidimensional Factor Model

When a factor solution is multidimensional, the approach described in Section 2.1 for interpreting factor loadings is further complicated, especially when the factors are allowed to correlate. This section explicates two broad complexities under multidimensionality. The first complexity involves vertical vs. horizontal interpretation and factorial complexity; the second involves factor rotation. In particular, the discussion of factor rotation details three interpretational problems that may arise from an oblique factor rotation.

Complexity One: Vertical vs. Horizontal Interpretation and Factorial Complexity
A unidimensional factor model contains a single column of loadings of the single factor on the q observed variables. In contrast, a multidimensional factor model contains a loading matrix L for the p factors on the q observed variables. Properly understanding the multiple-factor, multiple-variable structure entails two perspectives for interpreting L (Burt, 1940; Cattell, 1952, 1957; Gorsuch, 1983; Rummel, 1970), which we shall term the vertical interpretation and the horizontal interpretation.

The vertical interpretation reads along the q observed variables for one factor at a time. The nature of the vertical interpretation is descriptive and classificatory (Cattell, 1952, 1957; Gorsuch, 1983; Rummel, 1970). Its major purpose is to (1) uncover the factor structure by summarizing and categorizing the complex interrelationships in the data and (2) understand the substantive meaning of the factors. The vertical interpretation is most germane when the task of the factor analysis is to (1) assign labels that best characterize the substantive meaning of the factors or (2) form subscales among the set of observed items.

In contrast, the horizontal interpretation reads across the p factors one observed variable at a time. Horizontal interpretation considers factors as the underlying causes that explain the variation in the observed variables, as implied by the term "factor" used conventionally in experimental research design (Burt, 1947; Cattell, 1952, 1957; Gorsuch, 1983; Hoyle & Duvall, 2004; Rummel, 1970). Factor analysis is believed to delineate the causal nexus. The causal approach to factor interpretation is to "impute substantive form to the underlying and unknowns" (Rummel, 1970, p. 476). This interpretational perspective resonates with Spearman's (1904) original work, wherein he referred to factors as "the hidden underlying causes". As Hoyle and Duvall (2004) stated,

[...] the sets of observed variables believed to be caused, at least in part, by one or more factors. The patterns of association are conveyed in matrices of covariances or correlation, and to the extent that the associations among the observed variables are near zero when the influence of the factors is taken into account. (p. 301)

Many of the early applications of factor analysis intended to identify these "underlying factors". These historical endeavours are summarized in Burt's (1966) work. The horizontal interpretation is particularly useful for test validation purposes, where the researcher examines whether the hypothesized constructs have indeed caused the variation in people's item responses (Borsboom et al., 2003, 2004; Zumbo, 2007). Namely, the horizontal interpretation examines whether an item measures what it purports to measure (Borsboom et al., 2003, 2004; Kelley, 1927; Nunnally, 1978; Zumbo, 2007). In order to make meaningful causal statements about factors, the substantive meaning of the factors should, at the very least, be fairly well known to the researchers. Thus, the horizontal interpretation is most appropriate when the vertical interpretation has been demonstrated to be intelligible and meaningful by the present or previous data (Burt, 1947; Gorsuch, 1983). For this reason, the vertical interpretation should, in principle, precede the horizontal interpretation for a given data set, unless the meaning of the factors has already been designated and verified in previous studies. The two reading directions are contrasted in the short sketch below.
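The following Python sketch (hypothetical loadings, not from the dissertation) contrasts the two reading directions on a small loading matrix L: the vertical interpretation scans a factor's column down the variables, while the horizontal interpretation scans a variable's row across the factors.

```python
# A minimal sketch (hypothetical numbers) of the two reading directions on
# a loading matrix L for q = 4 observed variables and p = 2 factors.
import numpy as np

L = np.array([[0.70, 0.20],   # T1
              [0.65, 0.10],   # T2
              [0.15, 0.60],   # T3
              [0.45, 0.50]])  # T4 (loads saliently on both factors)

# Vertical interpretation: read one factor's column down the variables,
# e.g., to label Factor 1 by the variables that load saliently on it.
salient = np.abs(L[:, 0]) >= 0.40
print("Variables defining Factor 1:", np.where(salient)[0] + 1)

# Horizontal interpretation: read one variable's row across the factors,
# e.g., to ask which factors account for T4's variance.
print("T4's loadings across the factors:", L[3])
```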
When interpreting horizontally, an observed variable may yield salient loadings on more than one factor, a condition referred to as factorial complexity (Kim & Mueller, 1978), cross-loading (Gorsuch, 1983; Rummel, 1970), or cooperative factors (Cattell, 1952, 1978). That is, a given observed variable is not a pure measure of one factor but also of the others. Factorial complexity complicates the interpretation because researchers have to determine which factors should be recognized as the more important causes of the variation in an observed variable. Factorial complexity can also make vertical interpretation less straightforward because an item may be involved in defining the meaning of multiple factors or be categorized into multiple subscales. Similarly, the task of identifying a "meaningful factor structure" from L is no longer as clear-cut as in the no-cross-loading scenario (Church & Burke, 1994; Kim & Mueller, 1978).

In order to minimize the interpretational difficulty due to factorial complexity, the initial orientation of the factors is often rotated to target the simple structure proposed by Thurstone (1947). Thurstone's simple structure targets a solution where each factor is defined by a non-overlapping subset of observed variables that load highly relative to the rest of the measured variables, and each observed variable loads on as few factors as possible (Browne, 2001; Cattell, 1952; Gorsuch, 1983; Kim & Mueller, 1978; Nunnally, 1978).² An ideal realization of the simple structure is a loading matrix with no factorial complexities, where each observed variable loads on only one factor with zero loadings on the others. Although factor rotation was developed to simplify factorial complexities in a multidimensional factor solution, it inevitably creates other complexities along the way.

Complexity Two: Factor Rotation

A number of rotation procedures have been developed to capture the simple structure (see reviews in Browne, 2001; Gorsuch, 1983). These rotation techniques are often classified into two types: orthogonal or oblique. Orthogonal rotation constrains the angles among the factors to be at 90 degrees in the multidimensional space, hence yielding uncorrelated factors. Today, the most dominant method for orthogonal rotation is probably the VARIMAX procedure (Kaiser, 1958). When the factors are orthogonal, interpretation of the loadings is inherently straightforward and simple because the loadings represent both the unique effects of the factors on the observed variables and the unique bivariate zero-order correlations between the factors and the observed variables. Furthermore, the interpretation of the loading matrix is simplified by two mathematical properties (Burt, 1940; Fruchter, 1954; Nunnally, 1978), which we shall refer to as the horizontal and vertical additive properties.

The horizontal additive property affirms that the observed variance explained by a given factor is equal to the squared loading of that factor; hence, the communality of a given observed variable is equal to the sum of the squared loadings across the p factors (Gorsuch, 1983; Harman, 1976; Kim & Mueller, 1978; Kline, 1993; Nunnally, 1978). Under orthogonality, calculating the communality of an observed variable is a simple exercise of adding the squared loadings horizontally.

² In the original Thurstone (1947) book, he proposed five specific criteria for the simple structure.
The unique variance of an observed variable, therefore, is equal to one minus the sum of the squared loadings. For example, if an item loads on factors F1 and F2 with loadings of 0.7 and 0.2, respectively, F1 would contribute 0.7² = 0.49 (49%) of the variance and F2 would contribute 0.2² = 0.04 (4%) of the variance of that item. The communality would be equal to 0.49 + 0.04 = 0.53, and the unique variance would be equal to 1 - 0.53 = 0.47. Because of this additive property of calculating the communality, the contribution of a given factor to the observed variance of an item is readily attributable to its squared loading: 49% to F1 and 4% to F2. Also, the contribution of a given factor to the standardized communality (i.e., the communality rescaled to 1 by dividing by itself) is readily attributable to the ratio of the squared loading to the communality. Using the same example, F1 contributes 0.49/0.53 = 92.5% of the standardized communality, and F2 contributes 0.04/0.53 = 7.5% of the standardized communality.

The ease of orthogonal interpretation due to the horizontal additive property is analogous to that of a multiple regression in which the independent variables are uncorrelated: the contribution of each independent variable is directly attributable to it by the square of its standardized partial regression coefficient, and the R-squared value is the sum of the squared standardized partial regression coefficients across all the independent variables. Such interpretational simplicity under orthogonality transplants to factor analysis naturally because factors are, in essence, the independent variables for the observed variables.

The vertical additive property affirms that the amount of total variance explained by a given factor is equal to the sum of the squared loadings along the q observed variables (Guertin & Bailey, 1970; Kline, 1993; Nunnally, 1978). The total variance is the sum of the standardized variances of all the observed variables, which is equal to q (one for each of the q observed variables). For example, suppose two factors are extracted for a data set with six observed variables, which load on F1 with values of 0.3, 0.4, 0.6, 0.7, 0.5, and 0.2. The amount of the total variance, 6, that is accounted for by F1 will be equal to (0.3)² + (0.4)² + (0.6)² + (0.7)² + (0.5)² + (0.2)² = 1.39. The percentage of the total variance explained by F1 would be equal to 1.39/6 = 23.17%. In addition, the amount of total variance explained jointly by the two-factor model can be calculated simply by summing the amounts of total variance explained by F1 and F2.
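Both worked examples above can be verified directly. The Python sketch below reuses the same illustrative numbers to compute the horizontal sums (communality and unique variance) and the vertical sum (total variance explained by F1).

```python
# A minimal sketch verifying the two additive properties that hold under
# an orthogonal rotation, using the chapter's illustrative numbers.
import numpy as np

# One item's loadings on two orthogonal factors (the 0.7/0.2 example)
row = np.array([0.70, 0.20])
communality = np.sum(row**2)            # horizontal: 0.49 + 0.04 = 0.53
unique_var = 1 - communality            # 0.47
shares = row**2 / communality           # 92.5% and 7.5% of the communality

# Six items' loadings on F1 (the vertical example)
col = np.array([0.3, 0.4, 0.6, 0.7, 0.5, 0.2])
total_by_F1 = np.sum(col**2)            # vertical: 1.39
pct_of_total = total_by_F1 / 6          # about 23.17% of the total variance, 6

print(communality, unique_var, np.round(shares, 3), total_by_F1, round(pct_of_total, 4))
```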
The vertical and horizontal additive properties inherent in orthogonality are illustrated with real data taken from Holzinger and Swineford (1939). These data consist of 24 varied psychological ability tests administered to junior high school students. These classic data have been used throughout the history of factor analysis and are among the most widely studied in the literature (see Appendix A for the names of the 24 tests). The data have been shown to consist of four dimensions by various experts in factor analysis (e.g., Browne, 2001; Gorsuch, 1983; Harman, 1976; Preacher & MacCallum, 2003; Tucker & Lewis, 1973). The four-factor model explained 11.03 (46%) of the standardized total variance (i.e., 24).

Column 2 of Table 2.2 shows the communalities for the 24 tests, denoted h². Columns 3 to 6 show the orthogonal loading matrix, denoted L, and columns 7 to 10 show the corresponding squared values of the loading matrix, denoted L², which are the amounts of variance in the observed variables explained by each of the four factors. The last column, denoted ΣFL², shows the sum of the squared loadings across the four factors. Because of the horizontal additive property, the values of ΣFL² for each of the 24 tests are all equal to their corresponding communalities. In addition, the second-to-last row of columns 7 to 11 shows the sums of the squared loadings along the 24 tests for each of the four factors, denoted ΣTL². Because of the vertical additive property, these values correspond to the amounts of total variance explained by each of the four factors. The last row of columns 7 to 11, denoted %(F), shows the percentage of total variance explained by each factor. For example, factor one accounted for 4.08 of the total variance, which is 4.08/24 = 17% of the total variance. The percentages of total variance jointly explained by the four factors add up to 46%.

Table 2.2 Vertical and Horizontal Additive Properties of the Orthogonal Factor Model

Test | h² | L:F1 | L:F2 | L:F3 | L:F4 | L²:F1 | L²:F2 | L²:F3 | L²:F4 | ΣFL²
T1 | .47 | .24 | .61 | .14 | .15 | .06 | .37 | .02 | .02 | .47
T2 | .28 | .09 | .52 | -.03 | .04 | .01 | .27 | .00 | .00 | .28
T3 | .22 | .15 | .43 | .10 | -.01 | .02 | .19 | .01 | .00 | .22
T4 | .43 | .00 | .62 | .13 | .18 | .00 | .39 | .02 | .03 | .43
T5 | .70 | .81 | .13 | .15 | .01 | .66 | .02 | .02 | .00 | .70
T6 | .68 | .79 | .17 | .11 | .15 | .62 | .03 | .01 | .02 | .68
T7 | .77 | .87 | .12 | .08 | .03 | .75 | .01 | .01 | .00 | .77
T8 | .56 | .69 | .22 | .12 | .13 | .48 | .05 | .02 | .02 | .56
T9 | .72 | .80 | .24 | .09 | .11 | .65 | .06 | .01 | .01 | .72
T10 | .64 | .08 | -.10 | .78 | .14 | .01 | .01 | .60 | .02 | .64
T11 | .45 | .26 | .13 | .55 | .26 | .07 | .02 | .30 | .07 | .45
T12 | .45 | .05 | .19 | .64 | .05 | .00 | .04 | .41 | .00 | .45
T13 | .41 | .11 | .36 | .51 | .09 | .01 | .13 | .26 | .01 | .41
T14 | .45 | .14 | .03 | .01 | .65 | .02 | .00 | .00 | .43 | .45
T15 | .34 | -.05 | .13 | .03 | .57 | .00 | .02 | .00 | .32 | .34
T16 | .42 | .13 | .38 | .11 | .50 | .02 | .14 | .01 | .25 | .42
T17 | .38 | .04 | -.03 | .30 | .54 | .00 | .00 | .09 | .29 | .38
T18 | .26 | .09 | .14 | .21 | .43 | .01 | .02 | .05 | .18 | .26
T19 | .23 | .22 | .20 | .11 | .36 | .05 | .04 | .01 | .13 | .23
T20 | .38 | .30 | .46 | .03 | .27 | .09 | .22 | .00 | .07 | .38
T21 | .42 | .25 | .41 | .38 | .20 | .06 | .17 | .14 | .04 | .42
T22 | .44 | .46 | .44 | .10 | .17 | .21 | .19 | .01 | .03 | .44
T23 | .53 | .37 | .55 | .21 | .21 | .14 | .30 | .04 | .05 | .53
T24 | .42 | .40 | .21 | .37 | .28 | .16 | .05 | .14 | .08 | .42
ΣTL² | | | | | | 4.08 | 2.71 | 2.18 | 2.06 | 11.03
%(F) | | | | | | 17.00 | 11.30 | 9.10 | 8.60 | 46.00

Note. h² denotes the communality of a given test. ΣFL² is the sum of the squared loadings for a given test across the four factors. ΣTL² is the sum of the squared loadings for a given factor along the 24 tests. %(F) denotes the percentage of total variance explained by a given factor.
Third, if there were true associations among the factors, information on these relationships would be omitted by the orthogonality constraint and may lead to biased loading estimates. In spite of its limitations, orthogonal rotation has been the preference of many factor analysis users largely because of its ease of interpretation due to the additive properties (Browne, 2001; Conway & Huffcutt, 2003; Fabrigar et al., 1999; Harman, 1967; Henson & Roberts, 2006; Horst, 1965; Kieffer, 1998; Nunnally, 1978; Preacher & MacCallum, 2003; Rummel, 1970) and also because it is the default device of many popular statistical packages (Browne, 2001; Conway & Huffcutt, 2003; Fabrigar et al., 1999; Preacher & MacCallum, 2003). In fact, one can see the preference for orthogonal rotation even in cases where it is a priori tested and suggested that the factors are correlated. In this case, there is clearly a mismatch between the orthogonal factor solution and the data at hand Many methodologists have been warning against the thoughtless or unjustified use of orthogonal rotation and have been advocating the primary use of oblique rotation (Church & Burke, 1994; Cudeck, 2000; Fabrigar et al. 1999; Floyd & Widaman, 1995; Henson & Roberts, 2006; Preacher & MacCallum, 2003; Thurstone, 1947). Oblique rotation allows the orientation of the factors to be less or greater than 90 degrees, hence, correlated factors. Today, several equally popular oblique procedures are available such as the Direct QUARTIMIN (Jennrich & Sampson, 1966), and the PROMAX (Hendrickson & 20 White, 1964). There are several advantages in applying oblique rotation procedures. Below, we summarize some suggested in the literature: ■ An oblique rotation is more likely to approximate the simple structure by allowing flexibility in the orientation of factors (Browne, 2001; Cudeck, 2000; Gorsuch, 1983; Thurstone, 1947). ■ Although the purpose of factor analysis is to identify distinctive factors underlying a set of observed variables, many constructs in social and behavioural science may be better characterized as a continuum of distinction rather than as independent entities (Church & Burke, 1994; Floyd & Widaman, 1995; Preacher & MacCallum, 2003) ■ Orthogonality is a proposition to evaluate, not a fact to believe. An oblique rotation allows the factor correlations to be evaluated; if the factors are virtually orthogonal, the oblique rotation will return with, in essence, an orthogonal solution. In such a case, orthogonal constraint could follow subsequently for parsimony (Henson & Roberts, 2006). ■ Factor correlations reveal the psychological relationships of the underlying traits and provide information for identifying second-order factors (Cattell, 1978). ■ Technically, the orthogonality constraint can create biased loading estimates and a problem with under-identification in confirmatory factor analysis (Floyd & Widaman, 1995; Kenny & Kashy, 1992). In promoting the use of oblique rotation, Cattell (1978) stated, The reason begins with the fact that we should not expect influences [factors] in a common universe to remain mutually uninfluenced and uncorrelated. To this we can add an unquestionable statistical argument, namely, that if factors are by some rules uncorrelated in the total population they would nevertheless be correlated (oblique) in the sample just as any correlation that is zero in the population has a non-zero value in any sample. (p. 
Three Interpretational Complexities Due to Factor Obliquity

Although allowing inter-correlations among factors is theoretically and empirically more justifiable, factor obliquity generates more interpretational complexities. Factor correlations not only render more difficulties in estimating the standard errors of the loadings for conducting hypothesis testing (Archer & Jennrich, 1973; Jennrich, 1973), they may also invalidate the cut-off criteria that are commonly and habitually applied to infer "meaningful loadings". Three major interpretational complexities with these traditional cut-off criteria may occur when the factors are correlated.

1. Inconsistency between P and S

An oblique factor model yields two distinct types of parameters of interpretational interest: the pattern coefficients and the structure coefficients. The pattern coefficients are the standardized partial regression weights assigned to each of the factors to yield the prediction of the observed scores (i.e., the loadings). They reflect the unique and directional effect, that is, the change in the observed score per unit change in the factor score, taking into account the overlapping relationships among the factors. The structure coefficients are the zero-order correlations between the observed variables and the factors, indicating a bi-directional relationship. The structure coefficients are analogous to bivariate Pearson correlations that do not isolate the overlapping relationships among the factors. The matrix of pattern coefficients is often denoted P, that of structure coefficients S, and that of factor correlations R. Both P and S are of size q × p, and R is of size p × p. The relationship among P, S, and R is given as

S(q×p) = P(q×p) R(p×p),   (2.1)

which indicates that the structure matrix is equal to the pattern matrix post-multiplied by the factor correlation matrix. A detailed account of the meanings of, and relationships among, P, S, and R can easily be found in the factor analysis literature (e.g., Gorsuch, 1983; Harman, 1967; Kim & Mueller, 1978; Thompson & Daniel, 1996; Rummel, 1970).

Unfortunately, the interpretational ease borne of the additive properties of orthogonality does not automatically transplant to an oblique solution. From equation (2.1), one can see that P will be equal to S only when R is an identity matrix, with the off-diagonal elements all equal to zero, indicating no correlations among the factors. When the factors are correlated, there are always some discrepancies between P and S, depending on the magnitude of the correlations among the factors and their relationships with the observed variables. The generic and convenient term "loading" can no longer be used indiscriminately because it cannot synonymously designate the pattern and structure coefficients (Courville & Thompson, 2001; Henson, 2002; Henson & Roberts, 2006; Kim & Mueller, 1978; Thompson & Borrello, 1985). The pattern coefficient, as in the orthogonal case, still reflects the unique directional effect because it isolates the effects of the other factors in the model. However, the structure coefficient no longer reflects the unique bi-directional relationship between an observed variable and a factor. This is because structure coefficients are zero-order correlations that carry overlapping relationships rendered by the inter-correlations among the factors. The structure coefficient will always be an overestimate of the unique association between a variable and a factor, say Y1 and a given factor F1, because the correlation between Y1 and F1 may be partly or solely due to both Y1 and F1 being related to F2. Once F2 is removed, the correlation between F1 and Y1 may decrease or diminish. Gorsuch (1983) stated,

The structure coefficients do not reflect the independent contribution because the correlation between a variable and a factor is a function not only of its distinctive
The structure coefficient will always be an overestimate of the unique association between a factor and a variable, say Y1 and a given factor Fl, because the correlation between Y1 and Fl may be partly or solely due to both Y1 and F1 being related to F2. Once F2 is removed, the correlation between Fl and Y1 may decrease or diminish. Gorsuch (1983) stated, The structure coefficients do not reflect the independent contribution because the correlation between a variable and a factor is a function not only of its distinctive 23 variance but also of all the variance of the factors that overlaps with the other factors. (p. 207) Because pattern coefficients and structure coefficients can yield rather different values, they can lead to rather inconsistent interpretations about a factor solution. 2. Distortion of Additive Properties Unlike orthogonal loadings that exhibit both additive properties, pattern coefficients exhibits only the vertical additive property. More adversely, the structure coefficient exhibits neither additive property (Nunnally, 1978; Rummel, 1970). When factors are correlated, the horizontal additive property is distorted. The squared pattern coefficient no longer reflects the amount of variance explained by a given factor and the sum of the squared pattern coefficients across the p factors is no longer equal to the communality. The distortion can be explained by the variance of the predicted score 1‘7, which itself is a linear combination of factor scores as in equation (1.1). Taking a bi-factor model for instance, 'C' is a linear combination such that "C(= [31F1 + r32F 2 , where Oland 132 are the pattern coefficients for F1 and F2. The variance of the linear combination (i.e., communality) denoted as Var(Y) or equivalently Var(131F1 + (32F2) is given as, = (3 1 2Var(Fi) + r322Var(F2) + 2 pi132cov(Fi, F2)^(2.2) pi2 + p22 + 2p 1 p2R=^ (2.3) Because the variance of a factor score is often scaled to be one, equation (2.2) can be simplified to (2.3), where R is the Pearson correlation between Fl and F2. Assuming pi and 132 are of the same sign, calculating the communality by summing the squared pattern coefficients as the addition of the first two terms of (2.3) will yield an underestimate if the two factors are positively correlated (i.e., R is positive) because it fails to add the value of 243 1 02r or an overestimate if the two factors are negatively related because it fails to subtract the value of 24 2131132r. The above example is a simple case of bi-factor model. Evidently, the calculation of the communalities of a model with multiple oblique factors is even more convoluted, and the distortion of the horizontal additive property is further worsened. Thus, when the factors are correlated, the squared pattern coefficients do not actually partition the communality and their cross-factor sum does not give the communality despite the fact that the pattern coefficients do take into account the inter-correlations among the factors. Note that if the two factors are orthogonal, the communality will be equal to p 2 + 1322 because the third term in (2.3) drops out of the equation as a result of the fact that R = 0. This is why, when the factors are orthogonal, the communality is simply the sum of the squared loadings, which synonymously designate the pattern and structure coefficient (Gorsuch, 1983; Harman, 1976; Kim & Muller, 1978; Mulaik, 1972; Nunnally, 1978). 
Taking the 24 tests used for the orthogonal model as an example, Table 2.3 demonstrates the distorted horizontal additive property of the pattern coefficients due to factor obliquity. Column 2 shows the communalities for the 24 tests, denoted as h², which remain the same as those of the orthogonal model in Table 2.2. Columns 3 to 6 show the pattern matrix P, and columns 7 to 10 show the corresponding squared pattern matrix, denoted as P². Under obliquity, P² no longer represents the amount of observed variance explained by each of the four factors. The last column, denoted as ΣFP², shows the sum of the squared pattern coefficients across the four factors. Because of the distortion of the horizontal additive property, these values are no longer equal to their corresponding communalities. Elements in ΣFP² are either an under-representation or an over-representation of their corresponding communalities, depending on the correlations among the factors as well as between the factors and the tests.

Table 2.3
Distortion of the Horizontal Additive Property in the Pattern Coefficients

Col.#    2      3     4     5     6      7     8     9    10     11
        h²    P:F1    F2    F3    F4   P²:F1   F2    F3    F4   ΣFP²
T1     .47    .06   .65   .02  -.02    .00   .43   .00   .00    .43
T2     .28   -.05   .62  -.12  -.08    .00   .38   .02   .01    .41
T3     .22    .03   .49   .04  -.15    .00   .24   .00   .02    .27
T4     .43   -.23   .73   .03   .03    .05   .53   .00   .00    .58
T5     .70    .90  -.07   .04  -.12    .80   .01   .00   .02    .82
T6     .68    .84  -.05  -.03   .05    .70   .00   .00   .00    .71
T7     .77    .97  -.10  -.04  -.08    .94   .01   .00   .01    .96
T8     .56    .71   .05   .00   .02    .51   .00   .00   .00    .51
T9     .72    .85   .05  -.06  -.01    .72   .00   .00   .00    .72
T10    .64   -.02  -.28   .88   .02    .00   .08   .77   .00    .85
T11    .45    .15  -.04   .54   .14    .02   .00   .29   .02    .33
T12    .45   -.10   .14   .70  -.14    .01   .02   .49   .02    .54
T13    .41   -.06   .34   .51  -.10    .00   .12   .26   .01    .39
T14    .45    .07  -.15  -.15   .76    .01   .02   .02   .58    .62
T15    .34   -.18   .05  -.09   .65    .03   .00   .01   .42    .46
T16    .42   -.04   .31  -.04   .47    .00   .10   .00   .22    .32
T17    .38   -.06  -.21   .25   .58    .00   .04   .06   .33    .44
T18    .26   -.02   .03   .13   .43    .00   .00   .02   .18    .20
T19    .23    .14   .10   .00   .33    .02   .01   .00   .11    .14
T20    .38    .18   .44  -.12   .18    .03   .19   .02   .03    .27
T21    .42    .09   .36   .31   .04    .01   .13   .10   .00    .24
T22    .44    .37   .38  -.04   .04    .14   .14   .00   .00    .28
T23    .53    .21   .52   .07   .05    .05   .27   .01   .00    .32
T24    .42    .31   .05   .29   .17    .10   .00   .09   .03    .22
ΣTP²                                  4.15  2.73  2.14  2.01   11.03
%(F)                                 17.30 11.40  8.90  8.40   46.00

Note. h² denotes the communality of a given test. ΣFP² is the sum of the squared pattern coefficients for a given test across the four factors. ΣTP² is the sum of the squared pattern coefficients for a given factor across the 24 tests. %(F) denotes the percentage of total variance of the 24 tests explained by a given factor.

The difficulty of making the horizontal interpretation under obliquity has been discussed in the context of multiple regression through the notion of variable importance. After finalizing a regression model, researchers are often interested in finding out which independent variable is relatively more important (i.e., practically significant). The widely practiced method for assessing the contribution of each independent variable is to order the absolute sizes of the standardized partial regression coefficients (i.e., beta weights). This is because the beta weights are believed to overcome the incomparability problem of the unstandardized regression coefficients (i.e., b-weights), which reflect the different metrics of the independent variables (Achen, 1982; Greenland, Schlesselman, & Criqui, 1986; Healy, 1990).
However, concerns have been raised about using the beta weights as importance measures (e.g., Bring, 1994; Healy, 1990; Thomas, Zhu, & Decady, 2007). The major concern stems from the following argument: for a regression model with p independent variables, a beta weight reflects the unique effect of each independent variable over and above the effects of all (p-1) other variables. Nonetheless, the reference subsets for independent variable A and independent variable B are different (each beta weight is adjusted for a different set of (p-1) other variables), hence making the comparison invalid (Bring, 1994; Healy, 1990; Thomas et al., 2007). As is evident from the connection between factor analysis and multiple regression, the concern about using beta weights as importance measures in multiple regression translates to using the pattern coefficients to assess the importance of factors.

Note that the vertical additive property of the pattern coefficients, however, is not distorted by factor obliquity. The second to last row of columns 7 to 10 in Table 2.3, denoted as ΣTP², shows the sums of the squared pattern coefficients over the 24 tests for each factor. Because of the vertical additive property, these values still correspond to the amount of total variance explained by each factor, and their sum is equal to the total modeled variance (11.03). The last row of columns 7 to 10, denoted as %(F), shows the proportion of total variance explained by each factor; the values add up to 46%, which is identical to that of the orthogonal solution. The reason that the vertical additive property is not distorted by factor obliquity is that, for a factor model with q observed variables, the pattern coefficient reflects the unique effect of each factor over and above the effects of the identical (p-1) other factors. Thus, the reference subsets for the observed variables T1, T2, ..., T24 are the same, rendering the pattern coefficients comparable along the observed variables. Namely, despite the distortion of the horizontal additive property, the pattern coefficients are still warranted for assigning meanings to the factors, grouping items into subscales, or uncovering the underlying structure of the data.

When the factors are oblique, neither the horizontal nor the vertical additive property holds for the structure coefficients. The structure coefficients lose both additive properties simply because they fail to account for the overlapping relationships among the factors. The structure coefficient will always overestimate the true bi-directional relationship between a factor and an observed variable, or the unique effect of a factor on an observed variable. Consequently, the cross-factor sum of squared structure coefficients will always be an inflation of the communality. Likewise, the along-variable sum of squared structure coefficients will always be an inflation of the total variance explained. Distortion of both additive properties of the structure coefficients is demonstrated in Table 2.4 using the same data. The communalities for the 24 tests calculated by summing the squared structure coefficients across the four factors, denoted as ΣFS², consistently exceed the actual values. Likewise, the amount of total variance explained by each factor calculated by summing the squared structure coefficients over the 24 tests, denoted as ΣTS², exceeds its corresponding value in the orthogonal model in Table 2.2.
As a result, as shown by the last two rows of Table 2.4, the amount and the percentage of total variance explained by each factor are inflated, leading to an overestimated amount of modeled total variance (18.16 compared to the actual value of 11.03) as well as an overestimated percentage of modeled total variance (75.7% compared to the actual percentage of 46%).

Table 2.4
Distortion of the Horizontal and Vertical Additive Properties in the Structure Coefficients

Col.#    2      3     4     5     6      7     8     9    10     11
        h²    S:F1    F2    F3    F4   S²:F1   F2    F3    F4   ΣFS²
T1     .47    .42   .68   .31   .35    .18   .47   .10   .12    .86
T2     .28    .21   .50   .09   .16    .05   .25   .01   .03    .33
T3     .22    .26   .45   .19   .14    .07   .20   .04   .02    .33
T4     .43    .20   .63   .27   .33    .04   .40   .07   .11    .62
T5     .70    .83   .38   .32   .22    .68   .14   .10   .05    .97
T6     .68    .82   .43   .31   .35    .68   .19   .10   .12   1.08
T7     .77    .87   .37   .27   .23    .75   .14   .07   .05   1.01
T8     .56    .74   .45   .31   .32    .55   .20   .10   .10    .95
T9     .72    .85   .48   .30   .33    .72   .23   .09   .11   1.15
T10    .64    .19   .10   .76   .29    .04   .01   .58   .08    .71
T11    .45    .40   .35   .65   .43    .16   .12   .42   .19    .88
T12    .45    .21   .32   .66   .23    .04   .10   .43   .05    .63
T13    .41    .29   .48   .58   .29    .08   .23   .34   .09    .74
T14    .45    .23   .22   .18   .64    .05   .05   .03   .41    .55
T15    .34    .07   .24   .16   .56    .00   .06   .03   .31    .40
T16    .42    .30   .52   .30   .60    .09   .27   .09   .36    .81
T17    .38    .16   .17   .41   .57    .02   .03   .17   .32    .54
T18    .26    .21   .29   .33   .49    .05   .08   .11   .24    .48
T19    .23    .33   .35   .26   .44    .11   .12   .07   .19    .49
T20    .38    .44   .58   .22   .42    .19   .33   .05   .17    .74
T21    .42    .43   .56   .52   .41    .18   .32   .27   .17    .93
T22    .44    .58   .59   .29   .36    .33   .34   .09   .13    .89
T23    .53    .55   .69   .40   .44    .30   .48   .16   .19   1.13
T24    .42    .53   .44   .52   .46    .28   .19   .27   .21    .95
ΣTS²                                  5.63  4.94  3.77  3.82   18.16
%(F)                                 23.47 20.57 15.71 15.91   75.66

Note. h² denotes the communality of a given test. ΣFS² is the sum of the squared structure coefficients for a given test across the four factors. ΣTS² is the sum of the squared structure coefficients for a given factor across the 24 tests. %(F) denotes the percentage of total variance of the 24 tests explained by a given factor.

The difficulty of distributing the factors' contributions to the observed variation using the two oblique coefficients has resulted in several alternative methods being proposed in the literature (e.g., Bentler, 1968; Cattell, 1962; White, 1966). Bentler (1968) proposed the use of a total factor contribution matrix, which is the product of an initial factor loading matrix and the least-squares orthonormal approximation to the general transformation matrix. Although the total factor contribution matrix can uniquely partition the oblique factor contribution, the elements needed to produce such a matrix are hard to calculate and are not produced by the popular statistical packages. In addition, White (1966), by algebraically manipulating equation (1.1), noted that the product of the pattern coefficients and the structure coefficients could additively partition the observed variance. However, his work was entirely algebraic and provided no axiomatic principles to justify its use. As a result, these early attempts did not draw much attention from other scholars and users of factor analysis. Alternatively, practitioners have been avoiding these complexities by placing an orthogonal constraint to retain the additive properties and interpretational simplicity even when the factors are believed to be correlated by theory or shown to be correlated by empirical data (Browne, 2001; Conway & Huffcutt, 2003; Fabrigar et al., 1999; Harman, 1967; Henson & Roberts, 2006; Horst, 1965; Kieffer, 1998; Nunnally, 1978; Preacher & MacCallum, 2003; Rummel, 1970).
3. Inappropriateness of Cut-off Rules

Another complexity arising from factor obliquity is that the traditionally suggested rules for loading cut-offs may not be appropriate for the oblique coefficients. Most rules for a meaningful loading, such as 0.3 or 0.4, were suggested under the premise of orthogonality. For example, the rules suggested by Comrey and Lee (1992) were based on the premise of the horizontal additive property, which holds only under unidimensionality or orthogonality. Their cut-off criteria were suggested for orthogonal loadings, the squares of which represent the observed variance explained by the factors. As we have shown, the premise of a horizontal additive property is distorted by factor obliquity. A minimum factor loading of 0.32 was considered by Comrey and Lee (1992) as practically significant because it contributes 10% (0.32 × 0.32 ≈ 0.10) of the variation in the observed variable. However, a pattern or structure coefficient of the same magnitude does not necessarily contribute the same amount of the observed variance, owing to the distortion of the horizontal additive property. Comrey and Lee's criteria for being practically significant are therefore invalid when the factors are oblique.

Furthermore, the conventional cut-off criteria may not be appropriate, in particular, for the pattern coefficient, because most of the cut-off criteria were suggested for a correlational type of interpretation that is bounded within the range of -1 and 1. For example, the cut-offs suggested by Harman (1967) were developed based on the average of the sample Pearson correlation matrix. Also, Gorsuch's (1983) suggestion was based on doubling the critical value of statistical significance for the Pearson correlation. Both Harman's and Gorsuch's rules were proposed for a correlational and bi-directional interpretation, which may not be appropriate for the pattern coefficient, which is, by nature, NOT a bi-directional correlation. Furthermore, the pattern coefficients, like beta weights in a multiple regression, can occasionally exceed the bounds of -1 and 1 (Guertin & Bailey, 1970; Nunnally, 1978).

Under the circumstance of simple redundancy, which is often desired and assumed in a typical multiple regression (Cohen, Cohen, West, & Aiken, 2003), where the factors are slightly to moderately correlated and contain only redundant information, three conditions should follow: (1) a pattern coefficient should be of the same sign as its corresponding structure coefficient, (2) the magnitude of the pattern coefficient should be less than that of its corresponding structure coefficient because it reflects the unique effect after removing the redundant relationships, and (3) because the structure coefficient is bounded within the range of -1 and 1, by condition (2), the pattern coefficient should also be bounded within -1 and 1. However, when the factors are highly redundant (i.e., multicollinear) and/or do not follow the simple redundancy relationship (e.g., displaying a suppression effect), the conventional rules fall apart because the pattern coefficient may (1) be of the opposite sign to its corresponding structure coefficient, (2) be of a greater magnitude than its corresponding structure coefficient, and even (3) exceed the structure bounds of -1 and 1 (Nunnally, 1978; Rummel, 1970).³ Table 2.5 shows an extreme example where a simple redundancy relationship does not hold, and the conventional cut-off criteria may be inappropriate.
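Before turning to the real data, the mechanism can be seen in a toy computation (hypothetical numbers, not the well-being data described below). Inverting equation (2.1) gives P = S R⁻¹, so a row of pattern coefficients can be recovered from a row of structure coefficients once R is known; with a high factor correlation, the recovered pattern coefficients can violate all three conditions at once:

    import numpy as np

    # Hypothetical structure coefficients for one item on two highly correlated factors
    s = np.array([0.60, 0.10])
    R = np.array([[1.00, 0.90],
                  [0.90, 1.00]])   # factor correlation of .90, nearly multicollinear

    p = s @ np.linalg.inv(R)       # invert equation (2.1): P = S R^(-1)
    print(p)                       # approx. [ 2.68, -2.32]

Here both pattern coefficients lie far outside the bounds of -1 and 1, both exceed their structure coefficients in magnitude, and the second is of opposite sign to its structure coefficient, even though both structure coefficients are modest positive correlations.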
The data were retrieved from the psychological well-being measures of the 2003-2005 Wisconsin Longitudinal Study. The construct of psychological well-being was measured by the 31 items of Ryff's Scales of Psychological Well-Being (RPWB; Ryff, 1989; Ryff & Keyes, 1995). Six factors were extracted to reflect the six theoretical dimensions: autonomy (AU), environmental mastery (EM), personal growth (PG), positive relations with others (PR), purpose in life (PL), and self-acceptance (SA) (see Appendix B for a detailed description of the six theoretical dimensions).

First, observe that there is a distinctive discrepancy between the pattern and the structure coefficients. To make our point, pattern coefficients that exceed their corresponding structure coefficients are highlighted in italic face, those that exceed the structure bounds of -1 and 1 are highlighted in bold face, and those of opposite sign to their structure coefficients are highlighted with an underline. It is fascinating to observe that none of the 31 items actually satisfies all three conditions and strictly follows a simple redundancy relationship, as a result of the high correlations among some of the factors shown at the bottom of Table 2.5. This example demonstrates how the correlation- and orthogonality-based cut-off criteria may fall apart for interpreting the pattern coefficients.

³ The impact of multicollinearity and suppression on the partial coefficient has been discussed at length in the literature on multiple regression (e.g., Cohen, Cohen, West, & Aiken, 2003). The task of resolving multicollinearity and the suppressor effect in factor analysis is beyond the focus of this dissertation, although the presence, types, and mechanism of the suppression effect will be discussed throughout this dissertation.

Table 2.5
Demonstration of Inappropriateness of Traditional Rules for Interpreting the Pattern Matrix

Item   h²    P:F1   F2    F3    F4    F5    F6    S:F1   F2    F3    F4    F5    F6
AU1   .52    .04   .13   .80  -.17  -.37   .13    .49  -.34   .58   .32  -.28   .31
AU2   .39   -.17   .12  1.05  -.26  -.14  -.18    .28  -.21   .47   .20  -.15   .02
AU3   .44   -.30   .07   .94   .07   .07  -.15    .41  -.44   .62   .48   .18   .18
AU4   .21   -.14   .12   .69  -.08   .11  -.01    .28  -.29   .42   .30   .15   .19
AU5   .27    .37   .24   .44  -.25  -.15   .15    .40  -.25   .40   .22  -.16   .29
EM1   .41    .50   .13   .11   .10  -.34   .05    .55  -.38   .47   .37  -.25   .29
EM2   .58    .84   .18  -.12   .14   .26  -.02    .72  -.66   .60   .67   .34   .47
EM3   .55   1.29   .18  -.31  -.15   .01  -.09    .69  -.50   .46   .46  -.04   .34
EM4   .31    .63   .23   .09   .04  -.13   .00    .52  -.37   .44   .37  -.10   .28
EM5   .42    .55   .06   .01   .09   .26  -.04    .59  -.58   .54   .59   .34   .39
PG1   .40    .44   .01   .00   .25  -.36  -.08    .55  -.42   .46   .42  -.21   .22
PG2   .55    .09  -.11  -.09   .73  -.23  -.05    .65  -.65   .60   .70   .11   .35
PG3   .42   -.01   .07   .06   .66  -.57  -.05    .42  -.32   .40   .37  -.32   .13
PG4   .43    .09   .17  -.22   .98  -.08  -.12    .49  -.51   .45   .63   .21   .23
PG5   .37    .16   .16  -.02   .63  -.46   .01    .47  -.36   .41   .41  -.24   .21
PR1   .64    .06 -1.15  -.12  -.29   .07  -.15    .63  -.77   .56   .63   .31   .34
PR2   .62   -.23 -1.43  -.12  -.29  -.11  -.19    .54  -.72   .49   .55   .19   .24
PR3   .39   -.13  -.69  -.12   .09  -.57   .06    .41  -.39   .33   .30  -.30   .20
PR4   .50   -.06 -1.11  -.11  -.30   .03  -.06    .54  -.68   .48   .54   .27   .34
PR5   .46   -.24 -1.08  -.24   .06  -.11  -.08    .48  -.64   .43   .54   .21   .27
PL1   .68   1.24   .18  -.12   .02  -.11  -.59    .63  -.44   .47   .43  -.19  -.03
PL2   .50    .44   .03  -.12   .47   .03  -.06    .66  -.64   .58   .68   .23   .38
PL3   .31   -.14   .29  -.01   .95  -.06  -.04    .36  -.39   .39   .52   .21   .20
PL4   .50    .21   .03  -.23   .83  -.07  -.15    .58  -.60   .52   .68   .22   .27
PL5   .13   -.17  -.01  -.13   .57   .02  -.06    .17  -.24   .18   .31   .20   .09
PL6   .24    .27  -.10   .03   .06  -.22   .09    .46  -.38   .39   .35  -.09   .29
SA1   .68   1.22   .08   .00  -.24  -.11  -.36    .72  -.52   .56   .46  -.18   .17
SA2   .35    .69  -.15  -.11  -.16  -.20  -.03    .54  -.42   .41   .35  -.15   .27
SA3   .53    .79   .16  -.25   .23   .19   .06    .68  -.62   .54   .64   .29   .48
SA4   .46   1.00   .05  -.28  -.25   .05   .17    .62  -.48   .42   .42   .05   .48
SA5   .55    .31  -.33   .14  -.20  -.41   .22    .63  -.51   .54   .41  -.25   .45

Factor correlations:
       F1    F2    F3    F4    F5
F2   -.87
F3    .85  -.82
F4    .83  -.90   .82
F5    .14  -.37   .20   .43
F6    .60  -.56   .53   .54   .28

Note. h²: communality; P: pattern matrix; S: structure matrix. AU: autonomy; EM: environmental mastery; PG: personal growth; PR: positive relations with others; PL: purpose in life; SA: self-acceptance.
Note. Pattern coefficients that exceed their corresponding structure coefficients are highlighted in italic face, those that exceed the structure bounds of -1 and 1 are highlighted in bold face, and those of opposite sign to their structure coefficients are highlighted with an underline.

To our knowledge, the widely accepted cut-off rules, which are based on the premises of correlation, orthogonality, and a simple redundancy relationship, have rarely, if ever, been formally examined in terms of their appropriateness for the oblique coefficients. Nor have different cut-off criteria been suggested for the oblique coefficients.

2.3 Review of Practices and Recommendations for Interpreting Multidimensional Factor Models

Orthogonal or Oblique?

Although factor obliquity is more justified both theoretically and empirically, three major interpretational difficulties inhibit its use, as revealed in Section 2.2. To reiterate, these difficulties are: (1) the inconsistency between the pattern and structure coefficients and the choice of which to trust, (2) the distortion of the additive properties, and (3) the inappropriateness of the traditional rules suggested for orthogonal loadings. Nunnally (1978) stated,

The supposed advantages of oblique rotations are mainly conceptual rather than mathematical. The author has mild preference for orthogonal rotations, because (1) they are so much simpler mathematically than oblique rotations, (2) there has been numerous demonstrations that the two approaches lead to essentially the same conclusion about the number and the kinds of factors inherent in a particular matrix of correlation. (p. 376)

Although orthogonality simplifies the interpretation via the additive properties, many investigators have concerns about sacrificing accuracy for the sake of mathematical simplicity. For example, Thurstone (1947) stated, "In developing the factorial methods we have insisted that the methods must not impose orthogonality on the fundamental parameters that are chosen for factorial description, even though the equations are thereby simplified in that the cross products [factor correlation] vanish" (p. 140). Also, Cattell (1952) stated:

Our tolerance of the inconveniences of oblique factors [...] depends on our belief that in explaining and predicting natural events it is actually more convenient in the long run to follow nature than attempt to force upon it some artificial over-simplification. [...] And to insist on orthogonality of factors is indeed mistaking means for ends, since these simpler mathematical devices after all are only means to discovering and expressing whatever is in nature itself. (p. 122-123)

Without a doubt, interpretational difficulties due to factor obliquity may have hindered some day-to-day practices. In particular, there has been an under-utilization of the horizontal interpretation in the recent literature as oblique rotations became more popular.
We believe that such under-utilization may be due to the distortion of the horizontal additive property. As mentioned earlier, the horizontal interpretation embodies the fundamental theory of factor analysis and provides powerful evidence for test validation. It would be a misfortune to see a decreasing use of the horizontal interpretation as a result of the increasing use of oblique rotations.

The point to highlight here is that both orthogonal and oblique rotation methods have inherent advantages and disadvantages for interpreting a multidimensional factor solution. Researchers are often forced to choose between (a) constraining the factors to orthogonality for interpretational simplicity even if the factors are believed to be correlated and (b) allowing obliquity and tolerating the interpretational complexities. Our stance is that if there is a true association among the factors, which is highly plausible on both theoretical and empirical grounds, fixing the correlations among the factors to zero could produce biased estimates of the coefficients and lead to misleading interpretations. From a statistical modeling perspective, this is clearly a scenario of model misspecification, i.e., fitting an orthogonal factor model when the data are in fact oblique. If so, a researcher is trying to make sense of the oblique data based on the biased parameters produced by the orthogonal model. Simply put, oblique and orthogonal models are distinct models (Gorsuch, 1983, p. 33); one should not confuse the orthogonal loadings with the oblique coefficients.

Pattern, Structure, Both, or Alternatives?

As discussed earlier, when the factors are allowed to be oblique, there is always some discrepancy between a pattern coefficient and its corresponding structure coefficient. Researchers who recognize this inconsistency often encounter the question: "Which coefficient should I interpret and trust if different conclusions are reached?" Many researchers prefer the pattern coefficients (e.g., Cattell, 1952; Harman, 1967). Investigators like Thurstone always used the pattern coefficients (Gorsuch, 1983). The pattern coefficient is preferable for the following reasons. First, the pattern coefficient is rooted in the fundamental theory of factor analysis as shown in equation (1.1); it reflects the mathematical meaning of factor analysis, i.e., the change in the observed score per unit change in a given factor score, isolating the effects of the other factors in the model. Second, the pattern coefficient is believed to capture the simple structure better than does the structure coefficient, because it reflects the unique effect (Cattell, 1952; Rummel, 1970). Furthermore, unlike the structure coefficient, for which both additive properties are distorted, the pattern coefficient is still warranted for the vertical interpretation.

Other researchers, however, believe that the structure coefficient should be the focus of interpretation (e.g., Comrey & Lee, 1992; Gorsuch, 1983; Horst, 1965). Comrey and Lee (1992) contended that the meaning of factors should be defined by how similar the observed variables and the factor are, and that this similarity is best indicated by the bi-directional correlation. Also, Gorsuch (1983) argued that pattern coefficients capture only the directional relationship; they do not show the relationships of the variables to the factors, but rather of the factors to the variables (Gorsuch, 1983; Kim & Mueller, 1978).
Gorsuch argued that, although the pattern coefficient embodies the fundamental theory of factor analysis, the substantive nature of the factors should be already known to the researchers in order to make a meaningful directional interpretation. Nonetheless, such a condition may be implausible or unrealistic for many EFA contexts and purposes.

Recently, a few researchers have responded with support for Gorsuch's call for interpreting the structure coefficient (Courville & Thompson, 2001; Graham, Guthrie, & Thompson, 2003; Henson, 2002; Henson & Roberts, 2006; Kieffer, 1999; Thompson, 1997; Thompson & Daniel, 1996). The following summarizes these researchers' reasons for promoting the structure coefficients:

■ The meaning of the unknown factors is best understood by a bi-directional relationship, which shows how much the factors and the observed variable share in common.
■ Most investigators are more familiar with correlational-type coefficients, and are more accustomed to interpreting the practical importance of correlations, whose range is always bounded between -1 and 1.
■ It is important to interpret the inflated relationship in the structure coefficient because the correlations among the factors are of theoretical importance. The pattern coefficients systematically exclude the overlap among the factors and represent only their unique contributions, even when the overlap is of theoretical importance.
■ The structure coefficients yield a relatively model-independent estimate of the relationships. That is, regardless of what other factors occur in the next study of a given dataset, the variable should correlate at the same level with a particular factor. In contrast, the pattern coefficients are more model-dependent.
■ In some situations, a variable may be regarded as unimportant because of a small pattern coefficient, yet the structure coefficient may reveal a substantial relationship between the variable and the factor.
■ Examining the structure coefficient can detect a potential suppression effect. Such an effect cannot be detected if only the pattern coefficient is examined.

For these reasons, it is argued that interpreting the pattern coefficient by itself is insufficient for oblique factor interpretation and could lead to incorrect conclusions; a meaningful interpretation of an oblique factor solution should always examine both coefficients by juxtaposing them (Graham et al., 2003; Henson & Roberts, 2006; Kieffer, 1999; Rummel, 1970; Thompson & Daniel, 1996). Gorsuch (1983) concluded, "Indeed, proper interpretation of a set of factors can probably only occur if at least S and P are both examined" (p. 208).

Our stance on this is that the oblique coefficients, whether pattern or structure, have their interpretational difficulties, as we explained and demonstrated in Section 2.2. To reiterate, these difficulties are (1) the inconsistency between the pattern and structure coefficients and the choice of which to report and trust, (2) the distortion of the additive properties, and (3) the inappropriateness of the traditional cut-off rules. To our knowledge, the interpretational difficulty arising from problem (3) has not been heeded in the literature. Although the interpretational difficulties arising from problems (1) and (2) have been addressed in the background of the more technical literature (e.g., Gorsuch, 1983; Harman, 1976; Thurstone, 1947), most applied researchers have not attended to these problems (Graham et al., 2003; Tabachnick & Fidell, 2007; Thompson, 1997).
It is only recently that researchers have brought these problems into the spotlight (Courville & Thompson, 2001; Graham et al., 2003; Henson, 2002; Henson & Roberts, 2006; Thompson, 1997). In a review of 60 factor analyses, Henson and Roberts (2006) reported that 23 applied analyses (38%) used oblique rotation; among them, 11 reported only the pattern matrix, four reported only the structure matrix, and one reported both. Seven did not indicate which matrix was reported. This review revealed that orthogonal rotation was still preferred among applied researchers. When an oblique solution was chosen, reporting only the pattern matrix was the most common practice, followed by reporting only the structure matrix. Few, if any, attended to both coefficients and the possible inconsistency between them.

Fortunately, the more methodology-oriented investigators listed in Table 2.1 on pages 11-12 were more attentive to these issues. Column four of Table 2.1 indicates that among the 40 scholarly works, 27 (67.5%) explicitly acknowledged the inconsistency between the pattern and the structure coefficients. Column five of Table 2.1 shows that among the 40 scholarly entries, 16 (40%) used both coefficients, 12 (30%) reported only the pattern coefficients, six (15%) did not clearly indicate which matrix was reported, five (12.5%) chose to report only the orthogonal solution, and one (2.5%) reported only the structure coefficients. The last column shows that among the 40 entries, only 11 (27.5%) actually juxtaposed both matrices. It appears that the methodology-oriented investigators were more inclined to report either or both coefficients at their own discretion or upon the feedback of editors and reviewers, compared to the applied researchers and practitioners. Nonetheless, the pattern coefficients were still the preference if only one oblique coefficient was chosen for interpretation. It is regrettable that 13 out of the 40 scholarly entries did not explicitly acknowledge the existence of two types of coefficients despite their methodological appeal. For those who did, some ended the discussion by simply defining and distinguishing the two types of coefficients, and provided no further description of the interpretational difficulties or solutions for resolving them.

Summarizing the practice of both applied and methodological researchers, we conclude that the pattern coefficients were the preference for interpretation if factor obliquity was allowed. The pattern coefficients were occasionally accompanied by the structure coefficients. Our stance on this matter is that the advocacy for interpreting both the pattern and structure coefficients is conceptually and technically legitimate. We agree with the recent literature that researchers should notice the different messages P and S deliver and interpret them in concert; missing either piece of information could lead to fallible conclusions. However, the currently recommended approach of juxtaposing both coefficients does not actually resolve the three problems of the oblique coefficients. Instead, by means of juxtaposing both, it brings investigators back to the original problems. In a cynical sense, the problems seem to have doubled because now both coefficients must be examined.
The following statement by Bentler (1968) perfectly depicts our reflection on the review in Chapter Two and gives a preview of our discussion in Chapter Three:

While the pattern matrix P may be useful in evaluating simple structure, for a given oblique solution, it cannot evaluate simple structure in the contribution of factors to variables (or vice versa) since the variables' variance accounted for by oblique factors is a complex function of the pattern and structure matrices. (p. 490)

The next chapter proposes a new method for interpreting oblique factor models. The new method not only synergizes the information in both P and S but also resolves most of the interpretational difficulties inherent in factor obliquity. In essence, the new method makes the following statement by Thurstone (1947) about the orthogonal solution true for the oblique case: "We have then the theorem that each factor loading for orthogonal factors is the square root of the variance of test j attributable to the factor m" (p. 73).

References

Achen, C. H. (1982). Interpreting and using regression. Beverly Hills, CA: Sage.
Archer, C. O., & Jennrich, R. I. (1973). Standard errors for rotated factor loadings. Psychometrika, 38, 581-592.
Basilevsky, A. (1994). Statistical factor analysis and related methods. New York: Wiley.
Bentler, P. M. (1968). A new matrix for the assessment of factor contributions. Multivariate Behavioral Research, 3, 489-494.
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110, 203-219.
Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111, 1061-1071.
Bring, J. (1994). How to standardize regression coefficients. The American Statistician, 48, 209-213.
Bring, J. (1996). A geometric approach to compare variables in a regression model. The American Statistician, 50, 57-62.
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: The Guilford Press.
Browne, M. W. (2001). An overview of analytic rotation in exploratory factor analysis. Multivariate Behavioral Research, 36, 111-150.
Burt, C. (1940). The factors of the mind. London: University of London Press.
Burt, C. (1966). The early history of multivariate techniques in psychological research. Multivariate Behavioral Research, 1, 24-42.
Cattell, R. B. (1952). Factor analysis. New York: Harper.
Cattell, R. B. (1957). Personality and motivation: Structure and measurement. New York: World Book.
Cattell, R. B. (1962). The basis of recognition and interpretation of factors. Educational and Psychological Measurement, 22, 667-697.
Cattell, R. B. (1978). The scientific use of factor analysis in behavioral and life sciences. New York: Plenum Press.
Church, A. T., & Burke, P. J. (1994). Exploratory and confirmatory tests of the big five and Tellegen's three- and four-dimensional models. Journal of Personality and Social Psychology, 66, 93-114.
Cliff, N., & Hamburger, C. D. (1967). The study of sampling errors in factor analysis by means of artificial experiments. Psychological Bulletin, 68, 430-445.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.
Conway, J. M., & Huffcutt, A. I. (2003).
A review and evaluation of exploratory factor analysis practices in organizational research. Organizational Research Methods, 6, 147-168.
Courville, T., & Thompson, B. (2001). Use of structure coefficients in published multiple regression articles: β is not enough. Educational and Psychological Measurement, 61, 229-248.
Cudeck, R. (2000). Exploratory factor analysis. In H. E. A. Tinsley & S. D. Brown (Eds.), Handbook of applied multivariate statistics and mathematical modeling (pp. 266-296). San Diego, CA: Academic Press.
Cudeck, R., & O'Dell, L. L. (1994). Applications of standard error estimates in unrestricted factor analysis: Significance tests for factor loadings and correlations. Psychological Bulletin, 115, 475-487.
Cureton, E. E., & D'Agostino, R. B. (1983). Factor analysis: An applied approach. New York: Erlbaum Associates.
Ferguson, E., & Cox, T. (1993). Exploratory factor analysis: A user's guide. International Journal of Selection and Assessment, 1, 84-94.
Ford, J. K., MacCallum, R. C., & Tait, M. (1986). The application of exploratory factor analysis in applied psychology: A critical review and analysis. Personnel Psychology, 39, 291-314.
Fruchter, B. (1954). Introduction to factor analysis. Princeton: D. van Nostrand Company.
Giri, N. C. (2004). Multivariate statistical analysis (2nd ed.). New York: Marcel Dekker.
Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.
Graham, J. M., Guthrie, A. C., & Thompson, B. (2003). Consequences of not interpreting structure coefficients in published CFA research: A reminder. Structural Equation Modeling, 10, 142-152.
Greenland, S., Schlesselman, J. J., & Criqui, M. H. (1986). The fallacy of employing standard regression coefficients and correlations as measures of effect. American Journal of Epidemiology, 123, 203-208.
Guertin, W. H., & Bailey, J. P. (1970). Introduction to modern factor analysis. Ann Arbor: Edwards Brothers.
Hair, J. F., Jr., Babin, B., Anderson, R. E., Tatham, R. L., & Black, W. C. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Prentice Hall.
Harman, H. H. (1967). Modern factor analysis. Chicago: University of Chicago Press.
Harman, H. H. (1976). Modern factor analysis (3rd ed.). Chicago: University of Chicago Press.
Healy, M. J. R. (1990). Measuring importance. Statistics in Medicine, 9, 633-637.
Hendrickson, A. E., & White, P. O. (1964). Promax: A quick method for rotation to oblique simple structure. British Journal of Statistical Psychology, 17, 65-70.
Henrysson, S. (1950). The significance of factor loadings. British Journal of Psychology, Statistical Section, 3, 159-165.
Henson, R. K. (2002, April). The logic and interpretation of structure coefficients in multivariate general linear model analyses. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Henson, R. K., & Roberts, J. K. (2006). Use of exploratory factor analysis in published research: Common errors and some comment on improved practice. Educational and Psychological Measurement, 66, 393-416.
Holzinger, K. J., & Swineford, F. (1939). A study in factor analysis: The stability of a bi-factor solution. Supplementary Educational Monographs. Chicago: University of Chicago.
Horn, J. L. (1967). On subjectivity in factor analysis. Educational and Psychological Measurement, 27, 811-820.
Horst, P. (1965). Factor analysis of data matrices. New York: Holt, Rinehart & Winston.
Hoyle, R. H., & Duvall, J. L. (2004).
Determining the number of factors in exploratory and confirmatory factor analysis. In D. Kaplan (Ed.), The SAGE handbook of quantitative methodology for the social sciences (pp. 301-315). Thousand Oaks, CA: Sage.
Humphreys, L. G., Ilgen, D., McGrath, D., & Montanelli, R. (1969). Capitalization on chance in rotation of factors. Educational and Psychological Measurement, 29, 259-271.
Jennrich, R. I. (1973). Standard errors for obliquely rotated factor loadings. Psychometrika, 38, 593-604.
Jennrich, R. I., & Sampson, P. F. (1966). Rotation for simple loadings. Psychometrika, 31, 313-323.
Johnson, R. A., & Wichern, D. W. (2002). Applied multivariate statistical analysis. New York: Prentice Hall.
Kaiser, H. F. (1958). The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23, 187-200.
Kelley, T. L. (1927). Interpretation of educational measurements. Yonkers, NY: World Book.
Kenny, D. A., & Kashy, D. A. (1992). Analysis of the multitrait-multimethod matrix by confirmatory factor analysis. Psychological Bulletin, 112, 165-172.
Kieffer, K. M. (1998). Orthogonal versus oblique factor rotation: A review of the literature regarding the pros and cons. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, New Orleans, LA.
Kim, J., & Mueller, C. W. (1978). Introduction to factor analysis. Beverly Hills, CA: Sage.
Kline, P. (1994). An easy guide to factor analysis. London: Routledge.
Lawley, D. N., & Maxwell, A. E. (1971). Factor analysis as a statistical method. New York: American Elsevier.
Loehlin, J. C. (1998). Latent variable models: An introduction to factor, path, and structural analysis. Hillsdale, NJ: Lawrence Erlbaum Associates.
McDonald, R. P. (1985). Factor analysis and related methods. Hillsdale, NJ: Erlbaum.
Mulaik, S. A. (1972). The foundations of factor analysis. New York: McGraw-Hill.
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.
Pett, M. A., Lackey, N. R., & Sullivan, J. J. (2003). Making sense of factor analysis: The use of factor analysis for instrument development in health care research. Thousand Oaks, CA: Sage.
Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift's electric factor analysis machine. Understanding Statistics, 2, 13-43.
Press, S. J. (1982). Applied multivariate analysis: Using Bayesian and frequentist methods of inference. New York: Krieger.
Rummel, R. J. (1970). Applied factor analysis. Evanston, IL: Northwestern University Press.
Russell, D. W. (2002). In search of underlying dimensions: The use (and abuse) of factor analysis in Personality and Social Psychology Bulletin. Personality and Social Psychology Bulletin, 28(12), 1629-1646.
Ryff, C. D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57, 1069-1081.
Ryff, C. D., & Keyes, C. L. M. (1995). The structure of psychological well-being revisited. Journal of Personality and Social Psychology, 69, 719-727.
Spearman, C. (1904). General intelligence, objectively determined and measured. American Journal of Psychology, 15, 201-293.
Stevens, J. (1996). Applied multivariate statistics for the social sciences. Mahwah, NJ: Erlbaum.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Allyn and Bacon.
Thomas, D. R., Zhu, P. C., & Decady, Y. J. (2007). Point estimates and confidence intervals for variable importance in multiple linear regression. Journal of Educational and Behavioral Statistics, 32, 61-91.
Thompson, B. (1997).
The importance of structure coefficients in structural equation modeling confirmatory factor analysis. Educational and Psychological Measurement, 57, 5-19.
Thompson, B., & Borrello, G. M. (1985). The importance of structure coefficients in regression research. Educational and Psychological Measurement, 45, 203-209.
Thompson, B., & Daniel, L. G. (1996). Factor analytic evidence for the construct validity of scores: A historical overview and some guidelines. Educational and Psychological Measurement, 56, 197-208.
Thurstone, L. L. (1947). Multiple factor analysis. Chicago: University of Chicago Press.
Timm, N. H. (2002). Applied multivariate analysis. New York: Springer.
Tucker, L. R., & Lewis, C. (1973). A reliability coefficient for maximum likelihood factor analysis. Psychometrika, 38, 1-10.
White, O. (1966). Some properties of three factor contribution matrices. Multivariate Behavioral Research, 1, 373-377.
Yates, A. (1987). Multivariate exploratory data analysis. Albany: State University of New York Press.
Zumbo, B. D. (2007). Validity: Foundational issues and statistical methodology. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics, Vol. 26: Psychometrics (pp. 45-79). Amsterdam: Elsevier.

Chapter Three: Pratt's Importance Measures in Oblique Exploratory Factor Analysis

This chapter introduces the new method of Pratt's importance measures for assisting in interpreting an oblique factor model.¹ The method of Pratt's importance measures was originally used to order the importance of a set of independent variables in a linear multiple regression. In this chapter we show that the use of Pratt's measures in factor analysis can resolve three interpretational difficulties arising from factor obliquity. First, it integrates the information in both the pattern and structure coefficients; hence there is no need to choose which oblique coefficients to interpret. Second, it restores the properties of horizontal and vertical addition while allowing the factors to be oblique. Third, it resolves, in part, the problem of the traditional rules for assessing a meaningful relationship between an observed variable and a factor.

This chapter is organized as follows: Section 3.1 describes the original use of Pratt's importance measures in a linear multiple regression, Section 3.2 provides the rationale for adapting Pratt's importance measures to factor analysis, Section 3.3 demonstrates an example of using Pratt's measures in EFA for continuous data, and finally Section 3.4 demonstrates an example of using Pratt's measures in EFA for categorical data.

¹ A version of this chapter will be submitted for publication. Wu, A. D., Zumbo, B. D., & Thomas, D. R. Pratt's importance measures in exploratory factor analysis.

3.1 The Use of Pratt's Importance Measures in Linear Multiple Regression

Because the proposed methodology is an extended use of linear multiple regression, this section describes the role of Pratt's measures in ordering the importance of the independent variables in a multiple regression. Once a regression model is chosen, the next query often is: "Which independent variable contributes more to the variation of the dependent variable?" Numerous measures of relative importance have been proposed (see the references provided by Azen & Budescu, 2003; Bring, 1996; Kruskal, 1987; Pratt, 1987).
Among these measures, Pratt (1987) used an axiomatic approach to deduce the importance of an independent variable and showed that his unique measure could be expressed as the product of its standardized regression coefficient and its simple correlation with the dependent variable. Pratt justified the measure using an axiomatic approach based largely on symmetry and invariance to linear transformation. He showed that his measure is unique, subject, of course, to his axioms. Thomas, Hughes, and Zumbo (1998) provided an intuitive derivation of a standardized version of Pratt's measures based on the geometry of least squares, and continued to refer to it as "Pratt's measures" in recognition of Pratt's theoretical justification. Details about the geometry of least squares, as it applies to Pratt's measures, can be found in Thomas (1992) and Thomas et al. (1998). For the purpose of this dissertation, only the necessary applicative procedures are described to show the use of Pratt's measures in multiple regression and factor analysis. A detailed axiomatic discussion of Pratt's measures can be found in Pratt (1987), Thomas (1992), Thomas et al. (1998), and Thomas and Zumbo (1996).

It is crucial to define what "importance" means under Pratt's measures. Pratt refers to importance as the proportion of the explained variance (R-squared) to which an independent variable contributes, relative to the other independent variables in the model; therefore, one would more accurately refer to it as relative importance. Pratt justified the rule whereby relative importance is equated to variance explained, provided that the explained variance attributed to an independent variable is the product of the population beta weight and the population correlation of that independent variable with the dependent variable. Despite having been criticized, this definition is still widely used in the applied literature (e.g., Green, Carroll, & DeSarbo, 1978).

As we will show below, an additional feature of Pratt's measures is that they allow the importance of a subset of variables to be defined additively, as the sum of their individual importance, irrespective of the correlations among the independent variables. Other commonly used measures (e.g., the standardized beta weights, the t-values, the unstandardized b-weights) do not allow for an additive definition and are problematic with correlated independent variables.

The following description unpacks the meaning of Pratt's relative importance measures. Consider a linear multiple regression with one response variable of the form

Y = β₁X₁ + β₂X₂ + ... + βₚXₚ + U,    (3.1)

where βₚ is the estimate of the standardized regression coefficient for the pth of the p independent variables, and U is the error term that is uncorrelated with the Xₚ, with E(U) = 0. Pratt's measure, dₚ, for the relative importance of the pth independent variable included in the regression model is given by

dₚ = βₚρₚ / R²,    (3.2)

where ρₚ is the estimate of the simple Pearson product-moment correlation between the independent variable Xₚ and the response variable Y. Because Σₚ βₚρₚ = R², it follows that Σₚ βₚρₚ / R² = 1, hence Σₚ dₚ = 1, a result that was illustrated by Thomas et al.'s (1998) geometric derivation. The importance of the independent variables can then be ordered by dₚ accordingly. Thomas (1992) also suggested that, as a general rule, if dₚ < 1/(2p), namely half the average importance, then the corresponding independent variable can be considered unimportant.
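Equation (3.2) is a one-line computation once the beta weights and the zero-order correlations are in hand. The following minimal sketch (in Python with NumPy; the function names pratt_measures and unimportant are ours, not part of any statistical package) computes the measures and applies Thomas's 1/(2p) screening rule:

    import numpy as np

    def pratt_measures(beta, rho):
        # Pratt's relative importance, equation (3.2): d_p = (beta_p * rho_p) / R^2.
        # beta: standardized regression coefficients; rho: zero-order correlations
        # of each independent variable with the response. The d_p sum to 1.
        beta, rho = np.asarray(beta), np.asarray(rho)
        r_squared = np.sum(beta * rho)   # R^2 equals the sum of the beta_p * rho_p
        return beta * rho / r_squared

    def unimportant(d):
        # Thomas's (1992) rule of thumb: flag d_p below half the average importance
        return d < 1.0 / (2 * len(d))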
In addition, if the researcher is interested in the joint importance of a subset of variables for some theoretical reason, she or he can simply sum up the individual importance measures of the subset because of the additive property of Pratt's measures. For example, the joint importance measure of independent variables X1 and X2 is equal to d1 + d2.

The appropriateness of Pratt's measures has been criticized because they occasionally produce importance values beyond the logical range of zero to one. Negative Pratt's measures can occur, which is a counterintuitive characteristic for importance interpretation. Small out-of-bound Pratt's measures could be a result of chance capitalization, since βₚ and ρₚ are both sample estimates but Pratt's measures are population-defined. It is worth noting that large negative Pratt's measures can occur if βₚ and ρₚ are of different signs, a scenario referred to as a "negative suppression effect" (Conger, 1974; Lancaster, 1999). Also, Thomas et al. (1998) demonstrated that negative Pratt's measures are often associated with multicollinearity. Suppression effects and multicollinearity are two complex situations in which all other measures of variable importance display interpretational difficulty (Thomas et al., 1998). Thomas, Zhu, Zumbo, and Dutta (2006; in press) also reminded researchers that some regression models are so complex that no single measure of importance satisfies Pratt's axioms. For the purpose of this dissertation, the discussion will not dwell on the causes of negative Pratt's measures. Rather, the point here is to prepare readers to expect some negative values when Pratt's measures are later applied to factor analysis.

To sum up, Pratt's measure dₚ is additively defined; it partitions the standardized variance accounted for by a regression model into non-overlapping parts that are attributable to each independent variable. The relative importance of the independent variables can then be ordered according to the values of dₚ. As a preview, the additive property of Pratt's measures in a linear multiple regression is analogous to the horizontal additive property under factor orthogonality, as we explained in Chapter Two, and under obliquity, as we shall see in Chapter Three.

The following example demonstrates the use of Pratt's measures in ordering the importance of five variables in explaining grade eight students' (N = 8,912) mathematics performance in TIMSS (Trends in International Mathematics and Science Study). The five independent variables are (1) Parents' Education Level, (2) Mathematics Self-confidence, (3) Extra Lessons/Tutoring Time, (4) Computer Availability, and (5) Number of Books at Home. All five independent variables are measured on a quantitative scale, and all are significant predictors of students' mathematics performance. Table 3.1 shows their inter-correlations. The R-squared value for the regression model with the five independent variables is 0.38, F(5, 6929) = 865.61, p < 0.001, which indicates that the independent variables explain 38.4% of the observed variance in mathematics achievement.
Table 3.1
Correlation Matrix of the Five Independent Variables for TIMSS Mathematics Achievement

                                A      B      C      D      E
A Parents' Education Level    1.00    .13   -.06    .23    .36
B Mathematics Self-concept           1.00   -.16    .10    .12
C Extra Hour Tutoring Time                  1.00   -.05   -.09
D Computer Availability                            1.00    .23
E Number of Books                                         1.00

Table 3.2 lists the three building blocks for calculating Pratt's measures: (1) the R-squared value for the model, (2) the standardized regression coefficients, βₚ, for the five independent variables, and (3) the simple correlations between each of the independent variables and the dependent variable, ρₚ. Table 3.2 also shows the product term βₚρₚ and the resultant Pratt's measure dₚ for each independent variable. Note that the sum of the product terms βₚρₚ across the five independent variables is equal to .384, which is also the R-squared value for the regression model. Namely, the contribution of each of the independent variables to the R-squared value can be readily attributed to its corresponding value of βₚρₚ. Also, Pratt's measures partition the standardized R-squared value into five non-overlapping parts of 0.15, 0.31, 0.18, 0.09, and 0.27 that add up to 1.0. Because the importance measures now are non-overlapping and additive, one can order the relative importance by the size of dₚ. For the present example, Mathematics Self-confidence is the most important independent variable for mathematics performance relative to the other four independent variables chosen for this model. Computer Availability is the least important; in fact, this variable alone is considered unimportant because d₄ = 0.09 < 1/(2p) = 0.1 (p = 5). In addition, if a researcher would like to know, for some theoretical reason, how important students' educational resources, defined as the number of books and the availability of computers at home, are to their mathematics performance, he or she can simply sum up the two Pratt's measures for Number of Books and Computer Availability to yield a joint importance measure using simple addition: d(4+5) = d₄ + d₅ = 0.09 + 0.27 = 0.36. Jointly, educational resources account for 36% of the explained variation.

Table 3.2
Pratt's Measures for the Five Independent Variables for TIMSS Mathematics Achievement

Independent Variables                βₚ     ρₚ    βₚρₚ     dₚ
1. Parents' Education Level         .17    .34    .06     .15
2. Mathematics Self-Confidence      .30    .40    .12     .31
3. Extra Lesson/Tutoring Time      -.22   -.32    .07     .18
4. Computer Availability            .12    .27    .03     .09
5. Number of Books                  .26    .40    .10     .27
R-squared & standardized R-squared                .38    1.00

Note. βₚ: standardized regression coefficient; ρₚ: simple correlation between the response variable and the independent variable; dₚ: Pratt's importance measure, p = 1, ..., 5.
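With the pratt_measures sketch above, Table 3.2 can be reproduced, up to the rounding of the published coefficients, from the βₚ and ρₚ columns alone:

    beta = [0.17, 0.30, -0.22, 0.12, 0.26]   # beta weights from Table 3.2
    rho  = [0.34, 0.40, -0.32, 0.27, 0.40]   # zero-order correlations from Table 3.2

    d = pratt_measures(beta, rho)
    print(d.round(2))      # approx. [.15, .31, .18, .08, .27]; Table 3.2's .09 differs
                           # only because the published inputs are rounded to two places
    print(d[3] + d[4])     # joint importance of Computer Availability and Books:
                           # approx. 0.35 here, 0.36 in Table 3.2 from the rounded entries
    print(unimportant(d))  # flags only Computer Availability (d below 1/(2*5) = 0.1)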
3.2 The Rationale for Applying Pratt's Importance Measures to Factor Analysis

How can one take advantage of the desirable properties of Pratt's measures discussed above and apply them to help enhance the interpretability of an oblique EFA? This question can be answered by describing the connection between multiple regression and factor analysis. The parallelism between the two statistical methods makes the adaptation of Pratt's measures to EFA possible and justifiable. Gorsuch (1983, p. 14) gave a broad-stroke description of the connection by framing both multiple regression and factor analysis under the umbrella of the multivariate linear model (MLM). Namely, both multiple regression analysis and factor analysis are special cases of the MLM. Using the notation provided by Gorsuch, the MLM with q dependent variables can be written as a general equation,

Yq = βq1Xq1 + βq2Xq2 + ... + βqpXqp + Uq,    (3.3)

where Yq is the score on the dependent variable q, Xqp is the independent variable p for the dependent variable q, βqp is the standardized partial regression weight for the independent variable p on the dependent variable q, and Uq is the error term for the dependent variable q. One can easily see that dropping the subscript q in equation (3.3) simplifies the q simultaneous multiple regression equations to a single equation that looks identical to the multiple regression equation in (3.1).

In spite of the similarity in the governing conceptual framework, the two methods differ in several technical aspects. Gorsuch (1983) pinpointed that the major difference between multiple regression and factor analysis lies in whether the scores for the three major elements of the MLM (the dependent variables, the independent variables, and the weights assigned to the independent variables) are known to the researchers prior to the analyses (i.e., observed). If the scores for the dependent and independent variables are known and only the weights are to be estimated, the modeling technique is called multiple linear regression. If only the scores for the response variables are known and the scores for the independent variables as well as the weights are to be estimated, then the multivariate modeling technique is called factor analysis. Hence, a factor analysis with q observed dependent variables can be written as

Yq = βq1Fq1 + βq2Fq2 + ... + βqpFqp + Uq.    (3.4)

Note that equation (3.4) is identical to the fundamental theory of factor analysis given in equation (1.1) in Chapter One. The differences between equations (3.3) and (3.4) are that, first, the independent variables now are denoted as Fqp and are constructed by accounting for the inter-relationships of the Yq, and second, the weights βqp for these factors are now termed factor loadings in the orthogonal case or pattern coefficients in the oblique case.

Bring (1996) and Thomas et al. (1998) used the geometry of least squares to interpret Pratt's measures for multiple regression. Applying the same least-squares regression concept to factor analysis can help one understand Pratt's measures in factor analysis. That is, each of the q observed variables and the p common latent factors is represented as a vector in an N-dimensional vector space, where N is the sample size. A factor model for the qth observed variable is represented by the orthogonal projection of the qth observed variable onto the space spanned by the common latent factors. When the observed variable and the factors are standardized to have a mean of zero and a variance of one, the qth fitted score (i.e., Ŷq) is represented algebraically by the weighted sum of the factor vectors, as in equation (1.2).

The connection between multiple regression and EFA through the MLM framework makes the rationale for using Pratt's measures in EFA self-evident. That is, one can simply adapt Pratt's measures to order the importance of the factors with regard to each observed variable via the additive property of Pratt's measures, regardless of whether the factors are orthogonal or oblique. Not only that: as one will see in the following two EFA examples, Pratt's measures also retain the vertical additive property, despite the factor model being oblique.
Recall that one needs three building blocks in order to produce Pratt's measures: (1) the standardized partial regression coefficients, (2) the correlations between the response variables and the independent variables, and (3) the R-squared values. The goal of applying the Pratt's measures technique to EFA is to produce a Pratt's measure matrix, D, in which the elements are the importance measures of the p factors for the q observed variables. What, then, are the three corresponding building blocks in EFA? As is evident from the discussion in Chapter Two, they are: (1) the pattern matrix P of size q×p, in which the elements are the equivalents of the standardized partial regression coefficients in a multiple regression, (2) the structure matrix S of size q×p, in which the elements are the equivalents of the zero-order correlations between the dependent variable and the independent variables in a multiple regression, and (3) the vector of communalities, in which the elements are the equivalents of the R-squared value in a multiple regression. When an oblique rotation is selected, statistical software such as SPSS will produce an output with these matrices for calculating the Pratt's measure matrix. SPSS will also produce the factor correlation matrix, which indicates the correlations among the factors. Although this matrix is not needed for calculating the Pratt's measure matrix, the information in it is often of great theoretical value, and it is omitted if an orthogonal constraint is imposed. Once the three building blocks are identified in an EFA, one can apply Pratt's technique to transform the information in P and S into D.
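In code, the transformation amounts to two lines. A minimal sketch (ours), assuming P and S are numpy arrays holding the oblique EFA output:

    import numpy as np

    def pratt_matrix(P, S):
        # P, S: q x p pattern and structure matrices from an oblique EFA
        PS = P * S                    # element-wise products of paired coefficients
        h2 = PS.sum(axis=1)           # communalities: each row of PS sums to h^2
        D = PS / h2[:, None]          # Pratt's measures: each row of D sums to 1
        return PS, h2, D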
3.3 A Demonstration of Pratt's Measures in EFA for Continuous Data

Step by step, this section demonstrates how to use the three building blocks in factor analysis to obtain the Pratt's measure matrix by factor analyzing a continuous dataset. This demonstration will show how Pratt's measures can overcome the three interpretational problems arising from factor obliquity. Before the demonstrative examples, however, it is crucial to acknowledge that Pratt's measures, like most other importance measures or loading cut-offs, are model dependent; i.e., they are defined relative to the other factors chosen for a given model. Thus, it is crucial that a researcher has made a sound decision about the dimensionality (i.e., the number of factors to retain) prior to the application of the Pratt's measures method.

As in Chapter Two, we use the data from Holzinger and Swineford's (1939) 24 psychological ability tests to demonstrate the application of Pratt's measures in EFA. The four oblique factors were obtained using the same extraction and rotation methods. Table 3.3 contains the three building blocks required for producing the Pratt's measure matrix. Columns 1 through 4 consist of P, columns 6 to 9 of S, and the last column is the vector of communalities, h². The numbers in these three matrices are identical to those reported in Chapter Two but are now juxtaposed in one table for calculating and comparing to D.

Calculation of the Pratt's Measure Matrix

Using the three building blocks, the Pratt's measure matrix can be obtained in two simple steps. First, obtain a matrix, PS, the elements of which are the products of a given pattern coefficient and its corresponding structure coefficient, as shown in columns 11 through 14 of Table 3.3. Under a simple redundant relationship, the elements in PS represent the proportion of the variance in the tests that can be directly and uniquely attributed to each factor. They were derived by simply multiplying the corresponding pattern and structure coefficients for the four factors. For example, the product term of F1 for T1 is obtained by multiplying 0.06 (pattern) and 0.42 (structure), and is equal to 0.02, indicating that 2% of the variance of T1 can be uniquely attributed to F1. The product terms for the other three factors can be obtained by the same procedure.

Table 3.3
Pattern, Structure, Pattern×Structure, and Pratt's Measure Matrices, and Communalities for Holzinger & Swineford's (1939) Psychological Ability Data

        ---------- P ----------      ---------- S ----------      --------- PS ----------      ---------- D -----------
Test    F1   F2   F3   F4   ΣP²     F1   F2   F3   F4   ΣS²     F1   F2   F3   F4   ΣFPS     F1    F2   F3   F4   ΣFD    h²
T1     .06  .65  .02 -.02   .43    .42  .68  .31  .35   .86    .02  .45  .01 -.01   .47     .05   .95  .01 -.02  1.00   .47
T2    -.05  .62 -.12 -.08   .41    .21  .50  .09  .16   .33   -.01  .31 -.01 -.01   .28    -.04  1.12 -.04 -.05  1.00   .28
T3     .03  .49  .04 -.15   .27    .26  .45  .19  .14   .33    .01  .22  .01 -.02   .22     .04  1.02  .03 -.09  1.00   .22
T4    -.23  .73  .03  .03   .58    .19  .63  .27  .33   .61   -.04  .46  .01  .01   .43    -.10  1.06  .02  .03  1.00   .43
T5     .90 -.07  .04 -.12   .82    .83  .38  .32  .22   .97    .74 -.03  .01 -.03   .70    1.06  -.04  .02 -.04  1.00   .70
T6     .84 -.05 -.03  .05   .71    .82  .43  .31  .35  1.08    .69 -.02 -.01  .02   .68    1.02  -.03 -.01  .02  1.00   .68
T7     .97 -.10 -.04 -.08   .96    .87  .37  .27  .23  1.02    .84 -.04 -.01 -.02   .77    1.09  -.05 -.01 -.02  1.00   .77
T8     .71  .04  .00  .02   .51    .74  .44  .31  .32   .95    .53  .02  .00  .01   .55     .96   .04  .00  .01  1.00   .55
T9     .85  .05 -.06 -.01   .72    .85  .48  .30  .32  1.14    .72  .02 -.02  .00   .72    1.00   .03 -.02  .00  1.00   .72
T10   -.02 -.28  .88  .02   .84    .19  .10  .76  .29   .71    .00 -.03  .67  .01   .64    -.01  -.04 1.04  .01  1.00   .64
T11    .15 -.03  .54  .14   .33    .40  .35  .64  .43   .88    .06 -.01  .35  .06   .45     .13  -.03  .76  .13  1.00   .45
T12   -.10  .14  .70 -.13   .54    .20  .32  .66  .23   .63   -.02  .04  .46 -.03   .45    -.04   .09 1.02 -.07  1.00   .45
T13   -.06  .34  .51 -.10   .39    .29  .48  .58  .29   .74   -.02  .16  .29 -.03   .41    -.05   .40  .72 -.07  1.00   .41
T14    .07 -.15 -.15  .76   .62    .23  .22  .18  .64   .55    .02 -.03 -.03  .49   .45     .04  -.07 -.06 1.09  1.00   .45
T15   -.18  .05 -.09  .65   .46    .07  .24  .16  .56   .40   -.01  .01 -.02  .36   .34    -.03   .04 -.04 1.04  1.00   .34
T16   -.04  .31 -.04  .47   .32    .30  .52  .30  .60   .81   -.01  .16 -.01  .28   .42    -.03   .39 -.03  .67  1.00   .42
T17   -.06 -.21  .25  .58   .44    .16  .17  .41  .56   .54   -.01 -.03  .10  .33   .38    -.02  -.09  .26  .85  1.00   .38
T18   -.02  .03  .13  .43   .20    .21  .29  .33  .49   .48    .00  .01  .04  .21   .26    -.02   .03  .17  .82  1.00   .26
T19    .14  .10  .00  .33   .14    .33  .35  .26  .44   .49    .05  .03  .00  .15   .23     .20   .15  .00  .65  1.00   .23
T20    .17  .44 -.12  .18   .27    .44  .57  .22  .42   .74    .08  .25 -.03  .07   .38     .20   .67 -.07  .20  1.00   .38
T21    .09  .36  .31  .04   .23    .42  .56  .52  .41   .93    .04  .20  .16  .02   .42     .09   .48  .39  .04  1.00   .42
T22    .37  .38 -.04  .04   .28    .58  .58  .29  .36   .89    .21  .22 -.01  .01   .44     .49   .51 -.02  .03  1.00   .44
T23    .21  .52  .07  .05   .32    .55  .69  .40  .44  1.13    .12  .36  .03  .02   .53     .22   .69  .05  .04  1.00   .53
T24    .31  .05  .29  .17   .22    .53  .44  .52  .46   .95    .17  .02  .15  .08   .42     .39   .06  .36  .19  1.00   .42

Factor correlations (from the oblique rotation):
        F1    F2    F3
F2     .55
F3     .40   .43
F4     .40   .52   .47

ΣTPS   4.14  2.77  2.15  1.97   (total 11.03)
%(F)  17.30 11.50  9.00  8.20   (total 46.00)
ΣTD    6.61  7.37  4.54  5.47   (total 24.00)
%(F)  27.60 30.70 18.90 22.80   (total 100.00)

Note. h²: communality; P: pattern matrix; S: structure matrix; PS: a matrix in which the elements are the products of a given pattern coefficient and its corresponding structure coefficient; D: Pratt's measure matrix; ΣFPS is the sum of the elements in PS for a given test across the four factors; ΣTPS is the sum of the elements in PS for a given factor across the 24 tests; ΣFD is the sum of the elements in D for a given test across the four factors; ΣTD is the sum of the elements in D for a given factor across the 24 tests; %(F) denotes the percentage of the total variance of the 24 tests explained by a given factor. The highest Pratt's measures are highlighted in bold. Pratt's measures that are not the most important and yet are not considered unimportant are underlined.
Second, calculate Pratt's importance measures by dividing the product terms by the communality, as shown in columns 16 through 19 of Table 3.3. Pratt's measure of F1 for T1, for instance, is calculated by dividing the product term of F1 by the communality value, that is, 0.02/0.47 = 0.05, indicating that 5% of the standardized common variance in T1 is uniquely due to F1. Note that the sum of the four Pratt's measures is equal to 1, as shown in column 20 of Table 3.3, indicating that Pratt's measures partitioned the standardized communality into non-overlapping parts despite the factors being moderately correlated. This is crucial evidence that Pratt's measures work for a Pearson correlation matrix for continuous data in EFA. Applying the same procedure to all the other tests produces the Pratt's measure matrix shown in columns 16 to 19 of Table 3.3.

Interpreting the Pratt's Measure Matrix

Before interpreting the Pratt's measure matrix D shown in columns 16 to 19 of Table 3.3, note that the highest Pratt's measure for each test is highlighted in bold. Also, importance measures d_p < 1/(2p) = 0.125 (p = 4) were considered unimportant and can be ignored using Thomas' (1992) criterion. In addition, the Pratt's measures that were not the most important but were not considered unimportant are underlined. There are two approaches for interpreting the Pratt's measure matrix, depending on the purpose and stage of the factor analysis.

Horizontal interpretation reads across the factors one test at a time. As explained earlier, horizontal interpretation is most appropriate when the substantive meaning of the factors is fairly well known, or when the emphasis of the interpretation is on making direct causal inferences from the factors to the tests. For the present example, the substantive meanings of the four factors have been repeatedly verified and interpreted across empirical data and labelled as "Verbal", "Spatial", "Speed", and "Memory" (e.g., Harman, 1976; Preacher & MacCallum, 2003; Russell, 2002). The interpretation would emphasize how each factor influences the variation of a given test; hence, a focus on horizontal interpretation would be the more constructive and appropriate at this stage. Take T8 for example: Pratt's measures partitioned the communality into four additive parts that sum to 1.00 (with rounding error): 0.96, 0.04, 0.00, and 0.01, which are uniquely attributable to F1 to F4, respectively. Because of the horizontal additive property, one can conclude that 96% of the communality in T8 is attributable to F1 (Verbal), 4% to F2 (Spatial), 0% to F3 (Speed), and 1% to F4 (Memory). This is a statement that cannot be made by interpreting P or S alone or by juxtaposing both.
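The two steps can be checked directly against the printed values for T8 (a check using the coefficients from Table 3.3):

    import numpy as np

    p_t8 = np.array([.71, .04, .00, .02])   # pattern row for T8 (Table 3.3)
    s_t8 = np.array([.74, .44, .31, .32])   # structure row for T8

    ps = p_t8 * s_t8                        # step one: the products
    print(np.round(ps, 2), round(float(ps.sum()), 2))   # [.53 .02 .00 .01] and .55
    print(np.round(ps / ps.sum(), 2))       # step two: approx. [.96 .03 .00 .01];
                                            # Table 3.3 reports .96 .04 .00 .01 from
                                            # the unrounded coefficients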
Using the d_p < 1/(2p) = 0.125 rule, F2, F3, and F4 could be considered unimportant because F1 dominates the contribution to the communality. Observe that Pratt's measures yielded more mutually distinctive values than the pattern coefficients of 0.71, 0.04, 0.00, and 0.02 or the structure coefficients of 0.74, 0.44, 0.31, and 0.32 alone. Specifically, Pratt's measures transformed the pattern and structure coefficients into an even closer approximation of the simple structure. Also, juxtaposing the two oblique coefficients would lead one to an inconsistent interpretation if the traditional cut-off of 0.3 were used.

Vertical interpretation is most appropriate when the purpose of the factor analysis is to understand the substantive meaning of the factors or to identify subscales among a set of items. It is achieved by reading along the tests for one factor at a time. Assuming that the substantive nature of the four factors was unclear to the investigators, the meaning of the unknown factors can be inferred from the common meaning of the cluster of tests that share the same factor as their most important contributor. For example, the meaning of F4 can be inferred from T14, T15, T16, T17, T18, and T19, which share F4 as their most important contributor. Note that the task of clustering the tests is greatly eased by examining Pratt's measures because they yielded considerably more distinctive values than the pattern or structure coefficients.

Of course, there is no reason why one cannot make both the horizontal and the vertical interpretation when needed. In fact, the two-directional approach will give a more complete understanding of the factor model. In addition to the improvement over the conventional interpretation, Pratt's measures have a unique advantage that cannot be achieved by the pattern and structure coefficients. Because Pratt's measures allow importance to be defined additively, researchers can directly add up importance measures of theoretical interest to them and show how much two or more factors jointly contribute to the common variance of a particular test. For instance, the joint contribution of F1 and F2 to T23 is equal to d1 + d2 = 0.22 + 0.69 = 0.91; F1 and F2 jointly explain 91% of the communality of T23.

How Pratt's Measures Resolve the Three Interpretational Problems of an Oblique Model

As we have seen in Chapter Two, interpreting P or S individually, or juxtaposing both, entails three inherent interpretational problems caused by factor obliquity. These problems can be resolved by the Pratt's measures method, as explained below.

Problem One: The Dilemma of Choosing P or S

The inconsistency between pattern and structure coefficients can be clearly observed for T21 in Table 3.3. Observe that F1 has a very small pattern effect of 0.09 on T21 but a moderate structure coefficient of 0.42 with T21. As discussed earlier, interpreting either the pattern or the structure coefficient on its own is problematic and insufficient. Using the traditional cut-off of 0.3 or 0.4 for a practically meaningful relationship, as suggested in the literature, the pattern and structure coefficients would reach contradictory conclusions about whether a "meaningful" relationship exists between T21 and F1. Furthermore, simply juxtaposing both coefficients, as displayed in the first 10 columns of Table 3.3 and as recommended in the current literature, cannot resolve the difficulty either.
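Numerically, the single index for T21 and F1 can be obtained as follows (a small check using the Table 3.3 values):

    # T21 and F1: pattern, structure, and communality from Table 3.3
    p, s, h2 = .09, .42, .42
    d = p * s / h2                         # one index replacing two conflicting coefficients
    print(p >= .3, s >= .4, round(d, 2))   # False True 0.09: the cut-offs disagree, d does not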
Unlike the pattern and structure coefficients alone, or a simple juxtaposition of both, the Pratt's measure for T21 and F1, 0.09 (9%), is a correct representation of how much of the standardized variation in T21 is accounted for uniquely by F1. Namely, Pratt's measures transform the two oblique coefficients into one single index of variance explained by a factor. Thus, there is no longer a need to interpret P or S individually and encounter the dilemma of each interpretation leading to a different conclusion.

Problem Two: The Distortion of the Horizontal and Vertical Additive Properties

In Chapter Two, we explained and showed that, under obliquity, the horizontal additive property holds neither for the pattern nor for the structure coefficients, and the vertical additive property holds only for the pattern coefficients. Here, we see that Pratt's measures restore the horizontal and vertical additive properties while allowing factors to be oblique.

When the horizontal additive property is distorted by factor obliquity, neither the pattern coefficient nor the structure coefficient on its own can properly partition the communality of an observed variable. However, the product of the pattern and structure coefficients can. This property is demonstrated by the sums of the products across the four factors (denoted as ΣFPS in column 15 of Table 3.3) all being equal to the communalities. Take T8 for example: the sum of the four product terms (with rounding error), 0.53 + 0.02 + 0.00 + 0.01 = 0.55, is equal to the communality of T8. Alternatively, the horizontal additive property can be observed in the sums of Pratt's measures for the 24 tests all being equal to 1.0, the standardized communalities. In other words, Pratt's measures divide the standardized communalities of the observed variables into non-overlapping parts that are readily attributable to each factor, while allowing the moderate factor correlations to be revealed, as shown at the bottom of columns 1 to 3 of Table 3.3. In spite of the obliquity, the simple transformation of the pattern and structure coefficients into Pratt's measures restores the horizontal additive property held conventionally only by orthogonal loadings.

Pratt's measures also maintain the vertical additive property held by orthogonal loadings and pattern coefficients. In Table 3.3, we can see that the amount of total variance explained by each of the four factors, denoted as ΣTPS, is calculated by adding up the elements in PS along the 24 tests. Their sum across the four factors is equal to the total explained variance of 11.03, identical to what we have seen in Chapter Two. This information is shown in the second last row of columns 11 to 14. Accordingly, the proportion of total variance explained by each factor, denoted as %(F), can also be accurately calculated, as shown in the last row of columns 11 to 14. The vertical additive property in summing up the standardized total variance can also be seen in the last two rows underneath the Pratt's measure matrix in Table 3.3. Observe that the standardized total variance, 24, is equal to the sum of those explained by each factor individually; the proportion of standardized total variance, 100%, is also equal to the sum of those explained by each factor individually.
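Both additive properties can be verified mechanically (continuing the earlier sketch, and assuming the full 24×4 P and S of Table 3.3 have been entered as numpy arrays):

    PS, h2, D = pratt_matrix(P, S)
    print(np.round(D.sum(axis=1), 2))      # all 1.00: the standardized communalities
    col = PS.sum(axis=0)                   # vertical sums: variance explained per factor
    print(np.round(col, 2))                # approx. [4.14 2.77 2.15 1.97], total 11.03
    print(np.round(100 * col / 24, 1))     # approx. [17.3 11.5 9.0 8.2] percent of the
                                           # total variance of the 24 tests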
Problem Three: Inappropriateness of the Traditional Rules for Being "Meaningful"

In Chapter Two, we argued that the traditional rules for a meaningful variable-factor relationship rest on the premises that (1) the horizontal additive property holds, and (2) the absolute magnitude of the relationship is bounded within the range of 0 and 1. As we have explained and demonstrated, Pratt's measures are in accord with these premises. Thus, if desired, it is appropriate to use the traditionally suggested 0.3 or 0.4 rule as practically meaningful for interpreting D. Alternatively, one can use the 1/(2p) criterion suggested for multiple regression by Thomas (1992) as the minimum value for being considered important.

Readers may have noticed, and might argue, that interpreting P vertically using the traditional cut-off rules would identify the same vertical pattern as does D, and hence conclude that D is redundant because it provides no new information beyond P. The discussion in Chapter Two, however, should remind readers that this argument is problematic. That is, although P indeed preserves the vertical additive property, the traditional cut-off rules applied to P are in fact invalid because they were suggested for orthogonal loadings, not to mention that P itself is invalid for horizontal interpretation. The limitations of the oblique coefficients restrict the researchers' interpretational orientation. In contrast, D frees the researchers from the limitations of the oblique coefficients. It allows both vertical and horizontal interpretation so as to obtain a more complete understanding of the factor model when needed.

As discussed in Chapter Two, however, like other regression coefficients or importance measures, Pratt's measures work optimally when the observed variables and the factors follow the simple redundant relationship and display no multicollinearity or suppression effect. Under these complex scenarios, the cut-offs may become uninterpretable for D because non-trivial negative Pratt's measures may occur other than simply by chance, and Pratt's measures may exceed the bounds of 0 and 1. Take T4 for example: the Pratt's measure for F1 is -0.10, which indirectly makes the Pratt's measure for F2 greater than 1 (1.06). Examining the pattern and structure coefficients between T4 and F1, one can see that they are of different signs, displaying a negative suppression effect. Such a complex relationship, which disobeys the simple redundant relationship, should be interpreted separately using a different paradigm, as we will discuss further in Chapter Five.

3.4 A Demonstration of Pratt's Measures in EFA for Categorical Data

The second example involves Likert-type (i.e., rating scale) item responses, which are widely seen in social and behavioural science research. One way of factor analyzing categorical data is to base the analysis on the tetrachoric correlation matrix for binary data or the polychoric correlation matrix for polytomous data. The polychoric correlation is derived by estimating the linear relationship between two underlying unobserved variables, which are assumed to govern people's observed ordered categorical responses. The estimation of the polychoric correlation assumes the underlying unobserved variables are continuous and normally distributed (see Muthén, 1983; 1984). Our example data come from the background questionnaire of the 2003 TIMSS study.
The participants were 8,385 U.S. grade-eight students who answered questions regarding how much time they spent on each of nine outside-of-school activities. The question is: "On a normal school day, how much time do you spend before or after school doing each of these things?" One of the activities on the list was, for example, "I watch television and videos". The questions were measured on a 5-point Likert scale and coded as: (1) no time, (2) less than 1 hour, (3) 1-2 hours, (4) more than 2 but less than 4 hours, and (5) 4 or more hours. The purpose of choosing these data is to put Pratt's measures to an empirical test, to see whether the method also works for a polychoric correlation matrix derived from categorical data.

Because SPSS does not produce estimates of polychoric correlations,² one has to rely on PRELIS 2 (Jöreskog & Sörbom, 1999) to estimate the polychoric correlation matrix and save it as a data file.³ Unfortunately, PRELIS does not provide output for the structure matrix, so one has to resort back to SPSS EFA to obtain the building blocks for the Pratt's measure matrix. Using SPSS syntax, one can directly read in the polychoric correlation matrix and conduct an EFA. Syntax for reading the polychoric correlation matrix and running the EFA in SPSS is given in Appendix C. An alternative way to obtain S is to use equation (2.1), S_(q×p) = P_(q×p) R_(p×p), where P and R are automatically output by PRELIS.

Unlike the previous example, as far as we are aware, these data were never factor analyzed and published; hence, there is no a priori theory to help the researcher decide on the number of factors to extract, the substantive meaning of the factors, or whether the factors are correlated. We used parallel analysis with 100 replications, in addition to the conventional eigenvalue-greater-than-1.0 rule, to assist in choosing a preliminary number of factors to extract. Using the above-described procedure of factor analyzing a polychoric correlation matrix with the unweighted least squares (ULS) extraction method in SPSS, the data show a three-factor structure that is supported by both the "eigenvalue greater than one" rule and the parallel analysis (see Table 3.4).

Table 3.4
Eigenvalues and Parallel Analysis for 2003 TIMSS Outside School Activities Data

Factor   EFA eigenvalue     PA
1            2.372        1.092
2            1.510        1.064
3            1.138        1.040
4            0.911        1.019
5            0.832        0.999
6            0.667        0.980
7            0.651        0.958
8            0.520        0.940
9            0.400        0.910

Note. PA: parallel analysis. The first three factors were retained because their eigenvalues are larger than 1 and are greater than those of the PA.

In this application, ULS was applied to extract the common factors rather than the weighted least squares or generalized least squares method. Our rationale for using ULS was based on Jöreskog's (2003) contention that ULS and MINRES (minimum residuals; Harman, 1976) are equivalent and give the same robust solutions. According to Jöreskog, ULS can be used even when the correlation matrix is not positive definite, an occasionally occurring scenario when a tetrachoric or polychoric correlation matrix is analyzed.

______
² Tetrachoric and polychoric correlation matrices can also be obtained from the program "FACTOR" developed by Lorenzo-Seva & Ferrando (in press). A free download of this program is available from http://psico.fcep.urv.es/utilitats/factor/.
³ This procedure can be done in PRELIS by choosing Statistics/Factor Analysis/Output Options/Moment Matrix (choose Correlation, click Save to File, and give a .dat file name).
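The comparison values in Table 3.4 can be approximated with a short simulation. The sketch below (ours) implements a bare-bones version of Horn's parallel analysis under the stated sample size and item count; the exact implementation used for Table 3.4 may differ in detail:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, reps = 8385, 9, 100

    # eigenvalues of correlation matrices of random normal data
    eigs = np.zeros((reps, k))
    for r in range(reps):
        X = rng.standard_normal((n, k))
        eigs[r] = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

    pa = eigs.mean(axis=0)      # compare with the observed eigenvalues;
    print(np.round(pa, 3))      # retain factors whose observed eigenvalue exceeds pa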
Jöreskog also stated that "ULS is particularly suited for exploratory factor analysis where only parameter estimates and not standard error estimates and chi-squared values are of interest" (p. 1). The pattern matrix, structure matrix, and communalities for building the Pratt's measure matrix are shown in Table 3.5.

The resultant Pratt's measure matrix in Table 3.5 clearly shows that Pratt's measures also work empirically for categorical data based on a polychoric correlation matrix. This is illustrated by the fact that the standardized communalities for the nine items all add up to 1.0. Furthermore, the interpretability of the three-factor solution is tremendously enhanced. This is illustrated by the remarkably distinct proportions of communality explained by each of the three factors.

Table 3.5
Pattern, Structure, Pattern×Structure, and Pratt's Measure Matrices, and Communalities for 2003 TIMSS Outside School Activities Data

                                  ------- P --------       ------- S --------       ------- PS -------        ------- D --------
Item                               F1    F2    F3   ΣP²     F1    F2    F3   ΣS²     F1    F2    F3   ΣFPS     F1    F2    F3   ΣFD    h²
1. Watching TV & video            .52   .06  -.07   .28    .55   .26  -.05   .37    .29   .02   .00   .30     .94   .05   .01  1.00   .30
2. Playing computer games         .79  -.10   .10   .65    .75   .25   .08   .63    .59  -.03   .01   .58    1.03  -.04   .01  1.00   .58
3. Playing/talking w/ friends     .32   .40  -.18   .30    .49   .49  -.07   .48    .16   .20   .01   .37     .43   .54   .03  1.00   .37
4. Doing jobs at home            -.06   .53   .31   .39    .16   .59   .46   .58   -.01   .31   .14   .45    -.02   .70   .32  1.00   .45
5. Working at a paid job         -.02   .46   .04   .21    .17   .46   .16   .27    .00   .21   .01   .21    -.01   .99   .03  1.00   .21
6. Playing sports                 .02   .44  -.08   .20    .20   .42   .03   .22    .00   .19   .00   .19     .03   .99  -.02  1.00   .19
7. Reading a book                 .09  -.10   .64   .43    .05   .11   .61   .39    .00  -.01   .39   .39     .01  -.03  1.02  1.00   .39
8. Using internet                 .60   .03   .05   .37    .61   .29   .06   .47    .37   .01   .00   .38     .97   .02   .01  1.00   .38
9. Doing homework                -.04   .09   .44   .20    .00   .19   .46   .25    .00   .02   .20   .22     .00   .08   .92  1.00   .22

Factor correlations: F2 with F1 = .41; F3 with F1 = .01; F3 with F2 = .27.
ΣTPS: 1.41, .91, .77 (total 3.09); %(F): 15.61, 10.16, 8.55 (total 34.31).
ΣTD: 3.37, 3.29, 2.34 (total 9.00); %(F): 37.45, 36.58, 25.97 (total 100.00).

Note. h²: communality; P: pattern matrix; S: structure matrix; PS: a matrix in which the elements are the products of a given pattern coefficient and its corresponding structure coefficient; D: Pratt's measure matrix; ΣFPS is the sum of the elements in PS for a given item across the three factors; ΣTPS is the sum of the elements in PS for a given factor across the nine items; ΣFD is the sum of the elements in D for a given item across the three factors; ΣTD is the sum of the elements in D for a given factor across the nine items; %(F) denotes the percentage of the total variance of the nine items explained by a given factor. The highest Pratt's measures are highlighted in bold. Pratt's measures that are not the most important and yet are not considered unimportant are underlined.

Because the meaning of the factors is unknown, vertical interpretation is most suitable at the present stage. The vertical interpretation is warranted by the vertical additive property of Pratt's measures. Examining the Pratt's measures along the nine items, F1 contributes almost all of the common variance of "Watching TV & videos" (94%) and "Using internet" (97%), and all of the common variance of "Playing computer games" (100%) - three activities that all involve some form of electronic-related activity. Similarly, F2 is the major contributor to "Playing/talking with friends" (54%) and "Doing jobs at home" (70%), and is almost the sole contributor to the common variances of "Playing sports" (99%) and "Working at a paid job" (99%).
By identifying these four items that share F2 as the most important contributor to their communalities, F2 can be interpreted as a dimension of involvement in social interaction and support activities. In the same sense, F3 can be interpreted as involvement in "reading or studying" because it is the sole contributor to the common variance of the item "Reading a book for enjoyment" (100%) and a major contributor to the common variance of "Doing homework" (92%). Once the factor interpretation is established, one can then interpret the directional relationships of the factors on the observed variables using the horizontal additive property.

Another useful way in which Pratt's measures enhance the horizontal interpretation is in identifying unimportant contributors using the d_p < 1/(2p) rule. When interpreted horizontally, factors with Pratt's measures less than 0.167 (p = 3) can be regarded as unimportant for the variation of a given item. For example, even though F2 explains 8% of the common variance in the item "Doing homework", its contribution can be considered unimportant and ignored. Identification of the unimportant factors helps to eliminate unnecessary complexities when interpreting the factor solution.

It is interesting to observe that the d_p < 1/(2p) criterion clearly separates the most important factors from the "unimportant". For items with no factorial complexity (no cross-loading), the 1/(2p) criterion distinguishes the three factors into either "most important" or "unimportant", with no middle ground (i.e., in between most important and unimportant). For example, for item 9, "Doing homework", the factors that are not most important (F1 and F2) were all identified as unimportant. The only two exceptions were items 3 and 4, which clearly displayed factorial complexity, as shown by their pattern coefficients, indicating that more than one factor was an important cause of the observed variations. For instance, although F1 is not the most important contributor for item 3, "Playing/talking with friends", it still accounts for 43% of the communality, which is greater than the unimportance criterion, and should be recognized when interpreting horizontally.

Another valuable piece of information resulting from applying Pratt's measures is that the associations among the three factors can be revealed. This correlation matrix is shown at the bottom of Table 3.5. One can see that although F1 (electronic activities) and F3 (reading and studying) are nearly orthogonal, F1 (electronic activities) and F2 (social interaction) are estimated to correlate at .41, and F2 (social interaction) and F3 (reading and studying) are estimated to correlate at .27. Information on the factor correlations would have been omitted if an orthogonal rotation had been applied for mathematical simplicity.

Closing Remarks for Chapter Three

The purpose of this chapter was to introduce the use of the Pratt's measures method in factor analysis and to show how it resolves three interpretational difficulties arising from factor obliquity. First, we showed that it integrates the information in both the pattern and structure coefficients, so there is no need to choose which to trust when the individual interpretations lead to inconsistent conclusions. Second, it restores the horizontal and vertical additive properties that are conventionally warranted only for orthogonal factor models. Third, it resolves the major problems of the rules for "meaningful" cut-offs suggested for orthogonal models.
In essence, the method of Pratt's measures also resolves the dilemma of choosing between the theoretical flexibility afforded by an oblique model and the mathematical simplicity afforded by an orthogonal model. This historical debate is now dispensable because the advantages of both rotational methods can be achieved by the Pratt's measures method.

Readers should be warned that, as in multiple regression, the interpretation of Pratt's importance measures in EFA is model dependent. Namely, the importance of a factor is defined relative to the other factors in a particular factor solution. Because the importance of a factor is defined relative to the other factors, it is not appropriate to compare a factor's importance across factor solutions involving different numbers of factors. Hence, dimensional specification is crucial for Pratt's measures to operate meaningfully in EFA. If the Pratt's measure matrix is not interpretable, this may suggest that alternative models with a different number of factors, as well as a different rotation method, should be explored. Also, because Pratt's measures partition the explained variance into additive parts, researchers should be aware of how much explained variance they begin with in their analyses. That is, if the communalities of the observed response variables are low, the application of Pratt's measures is of little value because one cannot make sense of something that has little to explain - a problem that neither rotational methods nor Pratt's measures can address. In this case, attention should be paid to the selection of the observed variables before any interpretation of the factor solution.

References

Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods, 8, 129-148.

Bring, J. (1996). A geometric approach to compare variables in a regression model. The American Statistician, 50, 57-62.

Conger, A. J. (1974). A revised definition for suppressor variables: A guide to their identification and interpretation. Educational and Psychological Measurement, 34, 35-46.

Gorsuch, R. L. (1983). Factor analysis (2nd ed.). Hillsdale, NJ: Erlbaum.

Green, P. E., Carroll, J. D., & DeSarbo, W. S. (1978). A new measure of predictor variable importance in multiple regression. Journal of Marketing Research, 15, 356-360.

Harman, H. H. (1976). Modern factor analysis (3rd ed.). Chicago: University of Chicago Press.

Holzinger, K. J., & Swineford, F. (1939). A study in factor analysis: The stability of a bi-factor solution. Supplementary Educational Monographs. Chicago: University of Chicago.

Jöreskog, K. G. (2003). Factor analysis by MINRES (Scientific Software International technical documentation). Retrieved March 22, 2006, from http://www.ssicentral.com/lisrel/resources.html.

Jöreskog, K. G., & Sörbom, D. (1999). LISREL 8 user's reference guide. Chicago: Scientific Software International.

Kruskal, W. (1987). Relative importance by averaging over orderings. The American Statistician, 41, 6-10.

Lancaster, B. P. (1999, January). Defining and interpreting suppressor effects: Advantages and limitations. Paper presented at the annual meeting of the Southwest Educational Research Association, San Antonio, TX.

Lorenzo-Seva, U., & Ferrando, P. J. (in press). FACTOR: A computer program to fit the exploratory factor analysis model. Behavior Research Methods.

Pratt, J. W. (1987). Dividing the indivisible: Using simple symmetry to partition variance explained.
In T. Pukkila & S. Puntanen (Eds.), Proceedings of the Second International Tampere Conference in Statistics (pp. 245-260). Tampere, Finland: University of Tampere.

Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift's electric factor analysis machine. Understanding Statistics, 2, 13-43.

Russell, D. W. (2002). In search of underlying dimensions: The use (and abuse) of factor analysis in Personality and Social Psychology Bulletin. Personality and Social Psychology Bulletin, 28(12), 1629-1646.

Thomas, D. R. (1992). Interpreting discriminant functions: A data analytic approach. Multivariate Behavioral Research, 27, 335-362.

Thomas, D. R., & Zumbo, B. D. (1996). Using a measure of variable importance to investigate the standardization of discriminant coefficients. Journal of Educational & Behavioral Statistics, 21, 110-130.

Thomas, D. R., Hughes, E., & Zumbo, B. D. (1998). On variable importance in linear regression. Social Indicators Research, 45, 253-275.

Thomas, D. R., Zhu, P. C., Zumbo, B. D., & Dutta, S. (2006, June). Variable importance in logistic regression based on partitioning an R² measure. Paper presented at the 2006 Annual Meeting of the Administrative Sciences Association of Canada (ASAC), Banff, Alberta.

Thomas, D. R., Zhu, P. C., Zumbo, B. D., & Dutta, S. (in press). On measuring the relative importance of explanatory variables in a logistic regression. Journal of Modern Applied Statistical Methods.

Wu, A. D., Zumbo, B. D., & Thomas, D. R. (2006, April). Variable and factor ordering in factor analyses: Using Pratt's importance measures to help interpret exploratory factor analysis solutions for oblique rotation. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), San Francisco, CA.

Chapter Four: Demonstration of Pratt's Measures in Confirmatory Factor Analysis

Through the use of Pratt's measures in CFA, this chapter serves two purposes. The material in Sections 4.1 and 4.2 constitutes a follow-up study to Graham, Guthrie, and Thompson (2003).¹ The first purpose is to warn researchers that a structure coefficient in a confirmatory factor analysis can be entirely spurious due to the zero constraint on its corresponding pattern coefficient and the factor obliquity. Interpreting such a structure coefficient as advocated by Graham et al. could be misleading and problematic. The second purpose is to compare, judging by the CFA fit indices, the fit of the EFA model identified by the Pratt's importance measures > 1/(2p) criterion to those of models identified by the cut-offs commonly applied to the pattern and structure coefficients as well as to orthogonal loadings.

In Graham et al.'s (2003) article, the authors used two hypothetical data sets to deliver a central message: interpreting only the pattern coefficients and ignoring the information in the structure coefficients can lead to a problematic interpretation of an oblique CFA. Their first data set represents a CFA involving no factorial complexity, and the second data set represents a CFA involving one factorial complexity (i.e., one item with a cross-loading). Graham et al. argued that, to properly interpret a CFA, both the pattern and structure coefficients should be interpreted by means of juxtaposing them. In addition to the reasons we listed in Chapter Two, we believe that the preference for interpreting only the pattern coefficients in CFA practice is due to two other reasons.

______
¹ A version of this chapter will be submitted for publication: Wu, A. D., Zumbo, B. D., & Thomas, D. R. Pratt's importance measures in confirmatory factor analysis.
First, the parameter constraint of CFA, which is the key feature that distinguishes a CFA from an EFA, is actually placed on the pattern coefficients borne on the fundamental theory of factor analysis in 'A version of this chapter will be submitted for publication. Wu, A. D., Zumbo, B. D., & Thomas, R. D. Pratt's importance measures in confirmatory factor analysis. 71 equation (1.1) rather than the structure coefficients. Second, the default output of most CFA statistical packages provides only the pattern coefficients. To fulfill our first purpose, we re-analyzed the two datasets using LISREL by Maximum Likelihood estimation method. We demonstrate that the structure coefficients can be entirely or partly spurious; hence, adding the structure coefficients to the interpretation of an oblique CFA as recommended by Graham et al. (2003) is still insufficient for the reasons we have explained and demonstrated in Chapter Two. Next, the Pratt's measures method is applied to resolve the interpretational difficulties of an oblique CFA by integrating information in both the pattern and structure coefficient. 4.1 Pratt's Measures in CFA with no Factorial Complexities For the first data set in Graham et al. (2003), two factors that correlate at 0.68 were hypothesized to be underlying six observed variables. Table 4.1 shows the pattern and structure matrices reported by Graham et al. The second and third columns show that Fl has partial effects only on the first three observed variables, F2 has partial effects only on the last three observed variables, and all the other pattern effects were fixed at zero indicating no factorial complexities. This example is an ideal manifestation of the simple structure and is the template specification by many CFA users. Graham et al. argued that constraining the pattern coefficients to zeros does not automatically constrain the structure coefficients to zeros if the factors are correlated. Graham et al.'s (2003) point is illustrated by the pattern and structure coefficients reported in Table 4.1 - despite the zero constraint on the pattern coefficients, the corresponding structure coefficients still yield substantial values as highlighted in bold face. For example, although the pattern coefficient of F2 on variable A is constrained to be zero, its corresponding structure coefficient of 0.58, is far greater than the cut-off of 0.3 or 0.4 suggested in the 72 literature. Using this example, Graham et al. raised the problem of missing important information if the structure coefficient is not interpreted. Table 4.1 Factor Solutions for Case One: with No Factorial Complexities P S L PS SQRT(PS) F1 F2 F1 F2 F1 F2 F1 F2 F1 F2 F1 F2 A .849(g) .000(h) .849(i) .580(j) .836 .000 .721 .000 .849 .000 1.000 .000 B .726 .000 .726 .495 .721 .000 .527 .000 .726 .000 1.000 .000 C .817 .000 .817 .557 .836 .000 .667 .000 .817 .000 1.000 .000 D .000 .875 .597 .875 .000 .855 .000 .766 .000 .875 .000 1.000 E .000 .774 .528 .774 .000 .777 .000 .599 .000 .774 .000 1.000 F .000 .808 .552 .808 .000 .794 .000 .653 .000 .808 .000 1.000 R F1 1.000(k) .680(1) F2 .680(m) 1.000(n) Note. P: pattern matrix; S: structure matrix; L: loading matrix; PS: a matrix of which the elements are the products of a given pattern coefficient and its corresponding structure coefficient; Sqrt(PS): square root of PS; D: Pratt measure matrix; R: factor correlation matrix Graham et al.'s (2003) contention was only partially correct. 
They are correct in pointing out that ignoring the structure coefficients can miss important information about the bi-directional relationships. However, this statement holds only if the bi-directional relationship is indeed real. Our forthcoming contention shows that a non-zero structure coefficient is entirely spurious when accompanied by a zero pattern coefficient. When the pattern coefficient is constrained to zero, the correlation between variable A and F2 in Table 4.1 is entirely due to the correlation between F1 and F2. That is, the correlation between A and F2 arises because A correlates with F1, which in turn correlates with F2. The substantial zero-order bivariate correlation between A and F2 would completely disappear (rather than merely diminish) once the correlation between F1 and F2 is removed. Namely, the substantial correlation between A and F2, as indicated by the structure coefficient, is entirely spurious. Interpreting such a spurious relationship, as suggested by Graham et al., is as problematic as not interpreting it at all, if not more so!

The spuriously substantial structure coefficient due to the high factor correlation can be shown through equation (2.1), S_(q×p) = P_(q×p) R_(p×p), from Chapter Two, which says that the structure matrix is equal to the pattern matrix post-multiplied by the factor correlation matrix. Using this equation, the structure coefficient of 0.58 between A and F2, denoted as (j) in Table 4.1, is given by the values in cells (g), (l), (h), and (n) such that

    (j) = 0.58 = (g) × (l) + (h) × (n)
               = (pattern of F1 on A) × (correlation of F1 and F2) + (pattern of F2 on A) × (correlation of F2 with F2)
               = 0.849 × 0.68 + 0 × 1.

Because the second product term is equal to zero, the structure coefficient (j) between F2 and A is completely attributable to the first product term: the partial effect of F1 on A (0.849) times the correlation between F1 and F2 (0.68), which has nothing to do with any relationship between F2 and A. Construing the calculation of the structure coefficient (j) in this way clearly affirms that, when the corresponding pattern coefficient is constrained to zero, the moderately high correlation between F2 and A, 0.58, is entirely spurious and is simply a result of the correlation between F1 and F2.
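Equation (2.1) can be applied directly to the Table 4.1 values; the following sketch (ours, in Python) reproduces the structure matrix from the pattern matrix and the factor correlation:

    import numpy as np

    # pattern matrix and factor correlations from Table 4.1
    P = np.array([[.849, .000], [.726, .000], [.817, .000],
                  [.000, .875], [.000, .774], [.000, .808]])
    R = np.array([[1.00, .68],
                  [ .68, 1.00]])

    S = P @ R                        # equation (2.1): S = P R
    print(np.round(S[0], 3))         # approx. [.849 .577]: A's correlation with F2
                                     # appears although its pattern on F2 is fixed at 0
    S_orth = P @ np.eye(2)           # replacing R with the identity (orthogonal factors)
    print(np.round(S_orth[0], 3))    # [.849 0.]: the spurious entry vanishes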
The traditional way of investigating the unique correlation that is not inflated by the factor correlation is to obtain loadings using an orthogonal rotation, assuming no correlation between the factors, even if the factors are theoretically or empirically shown to be otherwise. Table 4.1 also shows the orthogonal loadings, L. As explained in Chapter Two, the loadings identically represent P and S when the factor rotation is constrained to be orthogonal. The elements represent both the unique partial effect and the unique bivariate correlation because the factors contain no overlapping information to be removed.

Comparing the pattern coefficients to the loadings in Table 4.1, one can see that the pattern coefficients remain very similar. This is because the pattern coefficient is the unique causal effect after the overlapping contribution of F1 and F2 has been accounted for. However, there is a troubling difference between the structure coefficients and the orthogonal loadings. For the orthogonal solution, variables with zero pattern coefficients also yield zero loadings, showing that there is no unique bi-directional relationship between F2 and variable A when F1 and F2 are uncorrelated. Under factor orthogonality, this zero bi-directional relationship between F2 and A can be construed using the same formula for calculating the structure coefficient:

    (j) = (pattern of F1 on A) × (correlation of F1 and F2) + (pattern of F2 on A) × (correlation of F2 with F2)
        = 0.836 × 0 + 0 × 1 = 0.

Compared to the oblique case, not only the second product term but also the first product term drops out of the calculation, yielding a zero structure coefficient (i.e., loading). The first product term, which produces the spurious bi-directional relationship in the oblique case, is no longer in effect because the pattern coefficient (i.e., loading) of F1 on variable A (0.836) is multiplied by a zero correlation between F1 and F2.

Although resorting to an orthogonal solution to help detect a spurious correlation and reveal a unique correlation between an observed variable and a factor is technically straightforward and convenient, this approach contradicts the CFA rationale of testing a model that is a priori hypothesized to be oblique. Also, this method may produce biased pattern coefficient estimates due to the orthogonal constraint when, in fact, the factors are correlated. This bias can be observed in the small inconsistencies between the pattern coefficients and the loadings in Table 4.1.

As demonstrated in Chapter Three, Pratt's measures can resolve the interpretational complexities resulting from factor obliquity. Without having to impose an unjustified orthogonality, Pratt's measures can additively attribute the unique contribution of each factor to the communality while still allowing the factor correlation to be freely estimated and tested. The unique bivariate relationship can be investigated without the orthogonal constraint. The columns under the heading PS in Table 4.1, which are the products of the pattern coefficients and their corresponding structure coefficients, indicate the amount of variance explained by each factor. The column denoted SQRT(PS), the square root of PS, is analogous to the unique correlation between a given factor and a variable with the overlapping relationship due to the factor correlation removed. By examining the values of SQRT(PS), one can see that the substantial structure coefficients with zero pattern constraints drop to zero once the overlapping relationship among the factors is removed, without having to impose an orthogonal constraint. The columns under the heading D in Table 4.1 list the Pratt's measures, which indicate the contribution of each factor to the standardized communality. By examining Table 4.1, it is clear that the large structure coefficients reported by Graham et al. (2003) were totally due to the factor obliquity. This can be seen in F1's zero contribution to the communality of the last three variables, whose pattern coefficients on F1 are constrained to zero, and F2's zero contribution to the communality of the first three variables, whose pattern coefficients on F2 are constrained to zero.
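Continuing the earlier sketch, the SQRT(PS) and D columns of Table 4.1 follow directly from P and S:

    PS = P * S                              # products of paired coefficients
    h2 = PS.sum(axis=1)                     # communalities
    print(np.round(np.sqrt(PS), 3))         # SQRT(PS): zero wherever a pattern is fixed at 0
    print(np.round(PS / h2[:, None], 3))    # D: rows A-C give [1 0], rows D-F give [0 1]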
4.2 Pratt's Measures in CFA with Factorial Complexity

For the second data set in Graham et al. (2003), two factors that correlate at 0.71 were hypothesized to underlie six observed variables. Table 4.2 shows the pattern and structure matrices reported by Graham et al. (2003). As in the last example, F1 has an effect on only the first three variables. However, F2, in addition to the last three variables, also has an effect on the third variable, C. All the other effects were fixed at zero. This model displays one factorial complexity, in variable C.

Table 4.2
Factor Solutions for Case Two: One Factorial Complexity

         P                  S                 L              PS           SQRT(PS)          D
      F1      F2        F1      F2        F1    F2       F1     F2      F1    F2       F1     F2
A  .834    .000      .834    .593       .836  .000     .696   .000    .834  .000    1.000   .000
B  .722    .000      .722    .514       .720  .000     .521   .000    .722  .000    1.000   .000
C  .930(g) -.132(h)  .836(i) .529(j)    .802  .111     .777  -.070    .882  ---     1.099  -.099
D  .000    .875      .622    .875       .000  .855     .000   .766    .000  .875     .000  1.000
E  .000    .774      .550    .774       .000  .777     .000   .599    .000  .774     .000  1.000
F  .000    .809      .575    .809       .000  .795     .000   .654    .000  .809     .000  1.000

R         F1        F2
F1    1.000(k)   .710(l)
F2     .710(m)  1.000(n)

Note. P: pattern matrix; S: structure matrix; L: loading matrix; PS: a matrix in which the elements are the products of a given pattern coefficient and its corresponding structure coefficient; SQRT(PS): square root of PS; D: Pratt's measure matrix; R: factor correlation matrix.

As in the first example, the structure coefficients are substantial for factors whose pattern coefficients are constrained to zero. For example, the structure coefficient between F2 and variable A is 0.593 despite the pattern effect being constrained to zero. As shown in the first example, this type of spurious correlation is entirely due to the high correlation between F1 and F2. What makes this example different from the first is that, for variable C, neither the pattern coefficient of F1 nor that of F2 is constrained to zero. Namely, both factors have a unique partial effect on variable C. Also, both factors yield a noticeable correlation with variable C, as indicated by the structure coefficients of 0.836 and 0.529.

The second data set differs from the first in that variable C displays a suppression relationship. In the first data set, all the variables follow a typical simple redundant relationship, where the zero-order correlations, with the overlapping relationship not removed, are expected to be equal to (in the orthogonal case) or greater than (in the oblique case) the corresponding partial regression coefficients. In the second example, however, the pattern coefficient of F1 on variable C (0.930) is greater than its corresponding structure coefficient (0.836), a circumstance referred to as a classic suppression effect; this leads the pattern coefficient of F2 to become negative (-0.132) while its corresponding structure coefficient remains positive (0.529), a circumstance referred to as a negative suppression effect² (Cohen, Cohen, West, & Aiken, 2003; Conger, 1974; Lancaster, 1999). In this scenario, these large structure coefficients may not be simply due to the factor correlation, as in the first case, and hence should not be automatically considered entirely spurious. The true unique correlation may be complex and difficult to uncover.

Under suppression relationships, the derivation of the structure coefficients between variable C and F1 and between C and F2 can be construed using the same method as for the first data set. The structure coefficient between F1 and C, denoted as (i) in Table 4.2, is given by

    (i) = 0.836 = (g) × (k) + (h) × (m)
                = (pattern of F1 on C) × (correlation of F1 with F1) + (pattern of F2 on C) × (correlation of F1 and F2)
                = 0.930 × 1 + (-0.132) × 0.71.

The structure coefficient of F1, (i), is equal to the first product term, the pattern coefficient of F1 (0.930 × 1), adjusted additively by the second product term, the pattern coefficient of F2 (-0.132) multiplied by the correlation between F1 and F2 (0.71).
The second product term (-0.132 × 0.71), which adjusts F1's structure coefficient downward, reflects the joint influence of the unique effect of F2 on C and the correlation between F1 and F2, which has nothing to do with F1 directly. One can also observe that the structure coefficient is adjusted downward to be less than its corresponding pattern coefficient, showing a classic suppression effect.

______
² In the language of multiple regression, a suppression effect is said to exist if the partial regression coefficient is greater than its corresponding zero-order bivariate correlation. Such a scenario is referred to as a "classic suppression effect" by Conger (1974), as shown between F1 and variable C in Table 4.2. Lancaster (1999) then distinguished the "classic suppression effect" from other types of suppression effects, such as the "negative suppression effect" that we pointed out in Chapter Two and that is shown between F2 and variable C in Table 4.2. For a review of the definitions and types of suppression effects in regression, see Conger (1974) and Lancaster (1999).
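The full decomposition for the second data set can be reproduced in the same way as for the first (our sketch, with the Table 4.2 values):

    import numpy as np

    # pattern matrix and factor correlations from Table 4.2
    P2 = np.array([[.834, .000], [.722, .000], [.930, -.132],
                   [.000, .875], [.000, .774], [.000, .809]])
    R2 = np.array([[1.00, .71],
                   [ .71, 1.00]])

    S2 = P2 @ R2
    print(np.round(S2[2], 3))        # approx. [.836 .528]: (i) and (j) for variable C

    PS2 = P2 * S2                    # note PS2[2, 1] is negative (approx. -.070):
    h2 = PS2.sum(axis=1)             # a negative suppression effect, so its square
    D2 = PS2 / h2[:, None]           # root is undefined
    print(np.round(D2[2], 3))        # approx. [1.099 -.099]: D exceeds the 0-1 bounds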
Note that the square root of PS for variable C and F2 cannot be calculated because the pattern coefficient and structure coefficient are of different sign and yield a negative product term due to the negative suppression effect. The last two columns of Table 4.2 display the Pratt's measures indicating how much of the standardized communality is attributable to each factor. Note F2 yields a noticeable negative importance measure for variable C (-0.099) showing that there is a potential suppression effect as we discussed in Chapter Three. To conclude our argument and findings in Section 4.1 and 4.2, for variables that display no factorial complexity, Pratt's measures show that the substantial structure coefficients reported by Graham et al., (2003), of which their corresponding pattern effects are constrained to zero, were entirely due to the high factor correlation. Unlike the traditional orthogonal solution, the unique bi-directional relationships can be obtained by the square root of the product term of PS without having to sacrifice the factor correlation information if the factors follow a simple redundant relationship. For variables that display factorial complexity but no suppression relationship, which are not discussed by the two examples in Graham et al. (2003), the structure coefficient will decrease once the factor correlation is removed. The structure coefficient should be interpreted only if the inflated correlation is removed by orthogonal constraint or by taking the square root of PS if obliquity is preferred. For variables that display suppression effect as displayed by variable C in the second data, interpreting the structure coefficient can be confusing because they may turn out to be less than or of different sign to their corresponding pattern coefficients. Although reporting the structure coefficient may reveal the suppression relationship, which cannot be detected by interpreting the pattern coefficient alone, it by no means reflects the true unique bi-directional relationship. Such complex suppression relationships require a different interpretation paradigm and should not be interpreted blindly using the traditional paradigm assuming a simple 80 redundancy relationship. The legitimate interpretation should look into the suppression effect in order to properly understand the complex relationship. The suppression effect issue will be revisited in the closing chapter. It is also worth noting that Graham et al. (2003) deliberately created two heuristic datasets with two dimensions that are highly correlated (R= 0.69 and 0.71). Their original intention of creating such data was to highlight that variables with zero pattern constraints may yield notable structure coefficients due to factor correlations that are often ignored by the practitioners. However, in a real data analysis context, a unidimensional solution may suffice with such highly correlated bi-factors. A unidimensional solution may fit the data satisfactorily and be preferred for its parsimony. More importantly, for the CFA mechanism to work with legitimacy and warrant, the highly overlapping and complex relationships that show suppression effects, as created by Graham et al. (2003) should have been explored prior to CFA. 
Namely, for CFA to work optimally, the researcher should have a clear understanding of the data structure, based either on a prior empirical examination of EFA results or on substantive theory, rather than using CFA as an exploratory tool to uncover a highly overlapping and complex factor structure.

4.3 Comparing the Fit of the Pratt's Measures Model: Additional CFA Case Studies

In this section, two new data sets were analyzed using CFA. The purpose is to compare the fit of the EFA model identified by the Pratt's importance measures ≥ 1/(2p) criterion to the fit of models identified by the cut-offs commonly applied to the pattern and structure coefficients, as well as to the orthogonal loadings. To be specific, the purpose of this section is to investigate how well the model suggested by the Pratt's measures ≥ 1/(2p) rule fits the given data, as indicated by the CFA fit indices, relative to the other six models suggested by the orthogonal loadings and by the oblique pattern and structure coefficients using the traditional cut-offs of 0.3 and 0.4. It is important to clarify that our intention is not to undermine the legitimacy of substantive theories in guiding CFA specification. Rather, it is to investigate, when there are no known theories, which EFA model best summarizes the given data as tested by CFA.

The first data set consists of 6,297 participants' responses to 26 items measuring the six theoretical dimensions of psychological well-being. These data were used in Chapter Two to demonstrate the problems of traditional cut-off rules for the oblique coefficients; however, the Pratt's measures method had not previously been applied to them. The second data set consists of 7,167 college students' responses to 10 items measuring the two dimensions of positive affect reported by the original authors (Watson, Clark, & Tellegen, 1988; see Appendix D for a description of the scale items). These two examples were chosen because their large sample sizes allowed us to randomly split each data set into two equal halves with a sufficient number of cases in each. The first half of the data was analyzed by EFA, and solutions were chosen according to the seven criteria described below. Based on the EFA results, the seven models were then specified and tested by CFA using the second half of the data.

CFA Model Specification

The seven model specifications are as follows. The first two models were chosen using cut-offs of 0.3 and 0.4 for the loadings of the orthogonal EFA model. Accordingly, in CFA, the loadings that were equal to or greater than 0.3 (Model 1) and 0.4 (Model 2) were freed to be estimated; all other loadings and the covariances among the factors were fixed to zero. The next two models were chosen using the same cut-offs for the pattern coefficients of the oblique EFA model. Accordingly, in CFA, the pattern coefficients that were equal to or greater than 0.3 (Model 3) and 0.4 (Model 4) were freed to be estimated, as were the covariances among the factors; all other parameters were fixed to zero. The fifth and sixth models were chosen using the same cut-offs but applied to the structure coefficients of the oblique EFA model. Accordingly, in CFA, the pattern coefficients whose corresponding structure coefficients were equal to or greater than 0.3 (Model 5) and 0.4 (Model 6) were freed to be estimated (for most SEM software packages, only the pattern coefficients, not the structure coefficients, can be specified in CFA), as were the covariances among the factors; all other parameters were fixed to zero.
The last model was chosen based on the unimportance criterion (i.e., d < 1/(2p)) suggested by Thomas (1992) for Pratt's measures. Accordingly, in CFA, the pattern coefficients whose corresponding Pratt's measures were not unimportant (i.e., d ≥ 1/(2p)) were freed to be estimated, as were the covariances among the factors; all other parameters were fixed to zero.

CFA Results Comparison

The fit of the seven models is reported and compared in Table 4.3 using the following indices: the p-value of the chi-square test, RMSEA and its confidence interval (CI for RMSEA), SRMR, and CFI, as well as two information indices, AIC and CAIC. One disadvantage of the chi-square in comparing model fit is that it always decreases when more parameters are added, so there is a risk of choosing a model with parameters that are really unnecessary. A number of fit measures have been proposed that take model parsimony into account; ideally, such a measure first decreases as parameters are added, reaches its smallest value for the "best" model, and then increases as further parameters are added. The AIC and CAIC (Akaike's information criterion and the consistent AIC) are measures of this type, and they can also be used to compare non-nested models, provided the sample is sufficiently large. Although the optimal cut-offs for good fit depend on a variety of factors such as model complexity (Browne & Cudeck, 1992; Hu & Bentler, 1999; Marsh, Hau, & Wen, 2004), in broad strokes, RMSEA < 0.08, SRMR < 0.05, and CFI > 0.90 are considered indicative of good fit. AIC and CAIC, on the other hand, are data dependent, and no single satisfactory cut-off can be suggested; the judging criterion is that a model yielding smaller AIC and CAIC values fits relatively better. In Table 4.3, the best-fitting model on each index can be identified by comparing values down each column.

Due to the large sample size, the chi-square tests for all seven models, shown in the p-value column of Table 4.3, were, as anticipated, all significant, indicating poor data-model fit and hence providing no useful information for cross-model comparison. This finding is not uncommon in the CFA literature: when the sample size is large, a chi-square test may easily reject the null hypothesis because of high statistical power. Nonetheless, for the psychological well-being data, not only did the Pratt's measures ≥ 1/(2p) model yield RMSEA, SRMR, and CFI values that satisfied the good-fit criteria suggested in the literature but, most importantly, it also yielded the best fit indices among the seven models, a finding further confirmed by the two information indices, AIC and CAIC. The same result was found for the positive affect data: six of the seven indices (all except the CAIC) suggested that the Pratt's measures ≥ 1/(2p) rule identified the model that best fit the data.

Table 4.3
Comparisons of CFA Fit Indices for Models Identified by Different EFA Coefficients and Cut-offs

Psychological Well-being Data
Model            df    chi2        p      RMSEA  CI for RMSEA  CFI    SRMR   AIC        CAIC
1. L ≥ 0.3       421   8403.910    .000   .082   .080-.083     .790   .210   9459.110   9988.680
2. L ≥ 0.4       296   10054.780   .000   .110   .110-.110     .720   .260   11935.440  12323.790
3. P ≥ 0.3       391   5608.540    .000   .069   .068-.071     .860   .054   6503.030   7027.540
4. P ≥ 0.4       287   4400.020    .000   .071   .070-.073     .870   .052   5052.840   5504.740
5. S ≥ 0.3       341   4648.910    .000   .068   .066-.069     .890   .039   5603.780   6698.210
6. S ≥ 0.4       355   3965.680    .000   .058   .057-.060     .900   .040   4422.130   5198.820
7. D ≥ 1/(2p)    392   3746.220    .000   .054   .052-.055     .910   .037   4216.150   4950.470

Positive Affect Data
Model            df    chi2        p      RMSEA  CI for RMSEA  CFI    SRMR   AIC        CAIC
1. L ≥ 0.3       31    760.950     .000   .080   .075-.085     .910   .110   775.920    947.780
2. L ≥ 0.4       33    1273.410    .000   .097   .092-.100     .850   .150   1159.520   1317.050
3. P ≥ 0.3       32    445.400     .000   .062   .057-.067     .950   .036   510.120    674.810
4. P ≥ 0.4       19    277.500     .000   .064   .057-.070     .960   .035   323.780    445.510
5. S ≥ 0.3 (a)   --    --          --     --     --            --     --     --         --
6. S ≥ 0.4       27    236.660     .000   .047   .042-.053     .970   .023   294.580    495.070
7. D ≥ 1/(2p)    30    237.780     .000   .045   .039-.050     .970   .023   289.310    468.320

Note. (a) The EFA results showed that the structure coefficients were all greater than 0.3 for both factors. Accordingly, to specify the S ≥ 0.3 model in CFA, all the model parameters had to be freely estimated (including the covariances among the factors); hence, the model fit indices could not be computed.
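To make the mapping from EFA output to these seven specifications concrete, here is a small schematic sketch of our own in Python; the matrix names (Pmat, D) and values are illustrative placeholders, not the dissertation's code or data:

import numpy as np

def free_by_cutoff(M, cutoff):
    # Models 1-6: free the CFA pattern slot where the EFA coefficient
    # (orthogonal loading, pattern, or structure) meets the cut-off.
    return np.abs(M) >= cutoff

def free_by_pratt(D, p):
    # Model 7: free the pattern slot where Pratt's measure is not
    # unimportant, i.e., d >= 1/(2p).
    return D >= 1.0 / (2.0 * p)

# Hypothetical 3-variable, 2-factor matrices for illustration only.
Pmat = np.array([[0.62, 0.05],
                 [0.48, 0.31],
                 [-0.02, 0.71]])
D = np.array([[0.97, 0.03],
              [0.55, 0.45],
              [0.01, 0.99]])

print(free_by_cutoff(Pmat, 0.4))   # a Model 4-style specification
print(free_by_pratt(D, p=2))       # Model 7: threshold 1/(2*2) = 0.25

Each True entry corresponds to a freely estimated pattern coefficient; everything else, apart from the factor covariances in the oblique models, is fixed to zero.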
Integrating the findings from the multiple fit indices across the two data sets suggests that the EFA model identified by the Pratt's measures ≥ 1/(2p) criterion best fits the given data. It is tentatively concluded that a CFA is more likely to indicate good model fit when the model is specified using the Pratt's measures ≥ 1/(2p) criterion in EFA. Nonetheless, as clarified at the outset, this finding does not suggest that researchers should ignore the role of their substantive theories in specifying a CFA factor model. Instead, it suggests that when there are no well-known theories for guidance, and researchers cannot help but rely on empirical EFA to assist in the model specification, the Pratt's measures ≥ 1/(2p) rule may provide the best-fitting model as tested by CFA.

Closing Remarks for Chapter Four

In this chapter, in accordance with the current literature, we agree that the information in the structure coefficient should not be ignored when interpreting a CFA model. Nonetheless, careful attention should be paid to the fact that the zero-order correlations between the observed variables and the factors are inflated by the factor correlations. By working through the calculation of the structure coefficient, we affirm that the bi-directional relationship indicated by the structure coefficient can be deceptive due to factor obliquity, to the extent that the entire relationship is spurious when the corresponding pattern coefficient is constrained to zero, a common practice in CFA. Furthermore, when a CFA model is hypothesized to be oblique, juxtaposing the pattern and structure coefficients does not resolve the problems inherent in both coefficients, nor does resorting to orthogonal loadings. A mathematically more justified and sufficient algorithm is to incorporate the information in both the pattern and structure coefficients by applying the Pratt's measures method. This contention is supported by the results of the CFA investigations of two empirical data sets, indicating that the model identified by the Pratt's measures ≥ 1/(2p) rule yields the best fit to the data compared with models that use only the information in the pattern, structure, or orthogonal coefficients. Our thesis, supported by empirical evidence, refutes the currently recommended and practiced methods for understanding an oblique factor model: interpreting P or S alone, or juxtaposing both without integrating the information.

References

Browne, M. W., & Cudeck, R. (1992). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136-162). Newbury Park, CA: Sage.
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.

Conger, A. J. (1974). A revised definition for suppressor variables: A guide to their identification and interpretation. Educational and Psychological Measurement, 34, 35-46.

Graham, J. M., Guthrie, A. C., & Thompson, B. (2003). Consequences of not interpreting structure coefficients in published CFA research: A reminder. Structural Equation Modeling, 10, 142-152.

Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1-55.

Lancaster, B. P. (1999, January). Defining and interpreting suppressor effects: Advantages and limitations. Paper presented at the annual meeting of the Southwest Educational Research Association, San Antonio, TX.

Marsh, H. W., Hau, K. T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and Bentler's (1999) findings. Structural Equation Modeling, 11, 320-341.

Thomas, D. R. (1992). Interpreting discriminant functions: A data analytic approach. Multivariate Behavioral Research, 27, 335-362.

Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063-1070.

Chapter Five: Contribution, Limitation, and Future Research

5.1 Recapitulation

This dissertation began with a review of the recommendations and practices for interpreting a multidimensional factor model. In particular, this review highlighted three major interpretational complexities with an oblique factor model. The first complexity arises from the inconsistency in the calculation and meaning of the two oblique coefficients: the pattern and structure coefficients. Often, researchers either proceed with the interpretation without noticing this inconsistency or choose to interpret only one coefficient, with no rationale provided. The current literature recommends addressing this problem by juxtaposing and interpreting both coefficients. This recommendation may have advanced interpretational practice by attending to the distinctive information in each coefficient. However, mere juxtaposition does not really resolve the interpretational problems, and may actually further complicate the interpretation if the two types of coefficients lead to inconsistent conclusions.

The second complexity concerns the distortion of additive properties due to factor obliquity. Additive properties simplify the interpretation of a factor solution and make the interpretation mathematically straightforward. Due to factor obliquity, neither the pattern nor the structure coefficients hold the horizontal additive property that warrants horizontal interpretation. Over the last four decades, the distortion of horizontal additive properties may have regrettably resulted in the underutilization of horizontal interpretation, which is rooted in the fundamental theory of factor analysis. To overcome this interpretational difficulty, orthogonal constraints are often unduly forced even when existing theory or the empirical data suggest that the factors are oblique.
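As a small numerical reminder of this distortion, the sketch below (our own illustration in numpy, reusing the two-factor values for variable C from Chapter Four) shows that once the factors are oblique, neither the squared pattern nor the squared structure coefficients add up to the communality, whereas Pratt's measures do:

import numpy as np

P = np.array([0.930, -0.132])                  # pattern coefficients of variable C
Phi = np.array([[1.00, 0.71], [0.71, 1.00]])   # factor correlation matrix
S = Phi @ P                                    # structure coefficients

h2 = P @ Phi @ P                               # communality under obliquity: about 0.708
print(np.round(np.sum(P ** 2), 3))             # 0.882 -- squared pattern coefficients do not sum to h2
print(np.round(np.sum(S ** 2), 3))             # 0.978 -- nor do squared structure coefficients
print(np.round(np.sum(P * S) / h2, 3))         # 1.0   -- Pratt's measures restore horizontal additivity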
The third complexity concerns the inappropriateness of the traditional rules for a "meaningful" relationship between the observed variables and the factors. The traditional rules were suggested for orthogonal loadings, which satisfy three conditions: the loadings (1) identically represent the pattern and structure relationships, (2) are horizontally additive, and (3) are bounded within -1 and 1. These commonly practiced rules are not equally applicable to the pattern and structure coefficients because those coefficients (1) represent distinctive information, (2) are not horizontally additive, and (3) may exceed the bounds of -1 and 1. As far as we are aware, the current literature has neither addressed this problem nor provided any solutions.

This dissertation adapts the method of Pratt's importance measures used in multiple regression to factor analysis and explicates how this new method can simultaneously resolve the three interpretational complexities arising from factor obliquity.

5.2 Novel Contributions

This dissertation has made eight novel contributions to the understanding and use of factor analysis. First, it systematically articulates three interpretational problems inherent in oblique factor models that have often gone unnoticed or unattended in the applied research literature. The first two have been discussed historically yet remain unresolved in the current methodology literature, and the third is articulated for the first time in this dissertation. The dissertation further highlights and critiques the inappropriateness of the conventional solutions and common practices for dealing with these problems.

Second, this dissertation is the first ever attempt at ordering the importance of latent variables for multivariate data. Although the axiom and geometry of Pratt's importance method and its use in regression have been well documented and established since the 1980s by Pratt and subsequent scholars, the application of Pratt's importance measures has been limited to ordering the importance of observed independent variables for univariate data. Recently, Pratt's importance measures have been used as a validation tool to order the contribution of observed and/or latent independent variables to a single latent variable (Zumbo, 2007; Zumbo, Wu, & Liu, 2008). To date, this dissertation is the first development of Pratt's measures to order the importance of latent variables underlying a set of observed variables simultaneously. The real strength of the new method is its ability to order the importance of latent variables that are mutually correlated, a task that had never been accomplished with the theoretical and mathematical justification provided by the Pratt's measures method.

Third, the Pratt's measures method resolves the three interpretational problems due to factor obliquity articulated by this dissertation. Chapter Three justifies and demonstrates the use of Pratt's measures in EFA. It provides theoretical rationales and two real-data demonstrations to substantiate the use of the Pratt's measure matrix in EFA, and demonstrates how the three interpretational problems can be easily resolved through a simple transformation of the pattern and structure coefficients into unified Pratt's measures. The interpretational problem regarding the inconsistency between P and S is resolved by the capacity of Pratt's measures to synergize the distinctive information in each coefficient.
This avoids the traditional dilemma of choosing between the pattern and structure coefficient for interpretation when the two produce incompatible conclusions. The interpretational problem regarding the distortion of additive properties is resolved by the capacity of Pratt's measures to restore the additive properties both horizontally and vertically. In particular, the restoration of the horizontal additive property may fuel a revival of horizontal interpretation, which has been almost forgotten since oblique rotation methods became popular. Furthermore, the restoration of horizontal additive properties provides a powerful tool for measurement validation in terms of examining the contribution of correlated constructs to the variation in item responses. The Pratt's measures method also partly resolves the third interpretational problem, the inappropriateness of the traditional rules for meaningful cut-offs. Under a simple redundancy relationship, Pratt's measures are bounded within 0 and 1 and uniquely represent the proportion of variation explained by the individual factors. This is consistent with the interpretation schema held by the traditional cut-off rules suggested under the premise of orthogonality.

The fourth contribution is made through the application of Pratt's measures in CFA. Chapter Four demonstrates that interpreting the structure coefficients of a CFA as advocated by Graham, Guthrie, and Thompson (2003) can be problematic. By working through the calculation of the structure coefficient, we prove that the structure coefficient can be deceptive due to factor obliquity, to the extent that the entire relationship is spurious when the corresponding pattern coefficient is constrained to zero. We further show that the structure coefficient can lead to varying degrees of mistaken identification of a factor's importance as the combined result of factorial complexity and obliquity. Our thesis and proof refute the current recommendation of juxtaposing the pattern and structure coefficients when interpreting an oblique CFA. Although the thesis and proof were set in the context of CFA, the same conclusions hold for EFA models, because an EFA can be seen as a special case of CFA with no constraints on the pattern coefficients.

Fifth, with two empirical data examples tested in CFA, we show that an EFA model chosen using the Pratt's measures ≥ 1/(2p) rule fits the data better than models chosen using other commonly practiced criteria. From an empirical perspective, this finding suggests that the Pratt's measures ≥ 1/(2p) rule depicts the oblique factor structure underlying the data better than the traditional rules. The results of these two investigations provide tentative support for the < 1/(2p) rule for unimportance originally suggested by Thomas (1992).

Sixth, at a broad level, the new method avoids the debate over the choice between oblique and orthogonal factor rotation. This claim is made because, through the method of Pratt's measures, all the mathematical advantages of an orthogonal model can now be easily achieved by an oblique model. In the literature review, we showed that the oblique model is theoretically and empirically preferred by the current methodology literature; however, researchers often feel compelled to adopt the orthogonal solution for ease of interpretation. When Pratt's method is used, researchers no longer have to sacrifice their preference for factor obliquity for the sake of mathematical simplicity.
In essence, the method of Pratt's measures dissolves the dilemma of choosing between the theoretical flexibility of an oblique model and the mathematical simplicity of an orthogonal model. This historical debate is now dispensable because both advantages can be achieved by Pratt's measures.

Seventh, to our knowledge, this dissertation may be the first to demonstrate and explicate the existence, mechanism, and implications of the suppression effect in factor analyses. For multiple regression, the methodology literature has accumulated ample discussion of the statistical mechanism of the suppression effect, and the substantive literature is replete with theoretical and empirical examples of suppression effects from various fields. However, there are few, if any, such discussions for factor analysis. This dissertation articulates how suppressors complicate and invalidate the traditional interpretation rules suggested under the schema of a simple redundancy relationship. Data examples demonstrate the existence and mechanism of such factor suppression effects in both exploratory and confirmatory contexts.

Eighth, by definition, the mathematics, application, and interpretation of Pratt's measures are directional, both for regression and for factor analysis. Taking a set of variables X1, X2, X3, and X4, for example, partitioning the explained variation of X1 by the remaining variables is mathematically and theoretically different from partitioning that of X2, X3, or X4 by the remaining variables. For factor analysis, this axiom sets the framework for a directional interpretation of the factors' effects on the observed variables. When this framework is cemented with the additive partition capacity of Pratt's measures, it becomes a useful tool, even under factor obliquity, for consolidating the classic interpretation of factors, that is, ascertaining the role of factors as the underlying causes. This is an essential and desirable interpretation that has been losing its status since the introduction of oblique factor analysis.

5.3 Caveats and Limitations

This thesis and its demonstrations show that the Pratt's measure matrix resolves three interpretational problems of an oblique model, which cannot be achieved by simply juxtaposing the pattern and structure coefficients as the current literature suggests. Nevertheless, it is crucial to realize that our suggestion is not to treat Pratt's measures as "oblique loadings" or to replace the use of the oblique coefficients. It should be fully realized that examining and comparing the pattern and structure coefficients can reveal a deep and rich story about a factor model, including such issues as the factor suppression effect. The real purpose of Pratt's measures, in fact, is to disentangle the interpretational problems by integrating the information in the two coefficients. To gain a thorough understanding of a factor model, we suggest that researchers aid their interpretation by incorporating the Pratt's measure matrix in addition to juxtaposing the oblique coefficients.

As pointed out earlier, the method of Pratt's importance measures in factor analysis is model dependent; that is, the importance of a factor is determined relative to the other factors extracted for the particular data. Because of this, it is erroneous and meaningless to compare factor importance across models with different numbers of factors for the same data.
For example, the relative importance of a factor in a three-factor model should not be compared to that of a four-factor model, even if the same meaning and label are assigned to that factor. Hence, correct dimensional specification is a prerequisite for Pratt's measures to work effectively. If the Pratt's measure matrix is not interpretable, this may suggest that alternative models with a different number of factors, as well as different rotation specifications, should be explored. The relative nature of Pratt's measures also has a special implication when the method is used for multivariate data: it is meaningless to compare the contribution of a factor based on q observed variables to that of q ± z variables (z being a positive integer), even if the q variables remain identical and the same meaning is assigned to that factor. Factor analyzing a set of multivariate data with q ± z variables, in essence, answers a different question from factor analyzing it with q variables.

Also, because Pratt's measures are defined by the explained variance accounted for by the factors, researchers should be aware of how much explained variance they begin with prior to applying the method. If the communalities of the observed variables are unsatisfactorily low, the application of Pratt's measures is of little meaning, because one cannot make sense of something that has little to explain, a problem that neither the other coefficients nor rotation methods can address. In this case, attention should be paid to the screening and selection of the observed variables before any interpretation of the factor solution.

Another circumstance that may limit the capability of Pratt's measures in factor analysis is the occurrence of negative estimates, which are counterintuitive to the definition of importance for Pratt's measures. Occasionally, a negative Pratt's measure may cause another Pratt's measure to be greater than 1, because the Pratt's measures for a given observed variable sum to 1. As explained earlier, small out-of-bounds Pratt's measures can result from chance capitalization, because the pattern and structure coefficients are sample estimates while Pratt's measures are population-defined. Large negative Pratt's measures, by contrast, may result from a factor model that disobeys the simple redundancy relationship and is too complex to be additively partitioned. These complex scenarios include, but are not restricted to, suppression effects and multicollinearity.

In the multiple regression literature, a negative suppression effect is defined as occurring when the partial regression coefficient of an independent variable is of different sign to its Pearson correlation (Conger, 1974; Lancaster, 1999). Given the connection between multiple regression and factor analysis, this definition applies naturally to factor analysis: a negative suppression effect is present if the pattern coefficient is of different sign to its corresponding structure coefficient. Relatively large, non-chance negative Pratt's measures can occur if a negative suppression is in effect. Since the denominator of a Pratt's measure, the communality, is always positive, a Pratt's measure can be negative only if the product term in the numerator, PS, is negative, which happens only if the pattern and structure coefficients are of different signs: a definitive scenario of the negative suppression effect.
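Operationally, such cases are easy to flag. The following sketch (our own numpy illustration, reusing variable C's values from Chapter Four) marks the variable-factor pairs whose pattern and structure coefficients disagree in sign, which are exactly the cells where a negative Pratt's measure must occur:

import numpy as np

def negative_suppression_flags(P, Phi):
    # Return a boolean q x p matrix that is True wherever the pattern and
    # structure coefficients disagree in sign (i.e., wherever PS < 0, so the
    # corresponding Pratt's measure is necessarily negative).
    # P: q x p pattern matrix; Phi: p x p factor correlation matrix.
    S = P @ Phi                                # structure matrix: S = P Phi
    return (P * S) < 0

P = np.array([[0.930, -0.132]])                # variable C on F1 and F2
Phi = np.array([[1.00, 0.71], [0.71, 1.00]])
print(negative_suppression_flags(P, Phi))      # [[False  True]] -- F2 is flagged for variable C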
A suppression effect can occur when the pattern and structure coefficients are of different signs, but it is not restricted to such cases. Two other types of suppression effects have been identified for multiple regression: the classic and the reciprocal suppression effects. Conger (1974) and Lancaster (1999) give systematic accounts and examples of the various types of suppression effects in multiple regression. Understanding and interpreting the suppression effect can be even more complex in factor analysis because multivariate data are involved: a suppression relationship may be interpretable and theoretically meaningful for one observed variable but not for another. The suppression effect in factor analysis needs to be better understood and is a good topic for future research.

Another complex scenario that limits the use of Pratt's measures is multicollinearity. Although there is not yet a direct proof, Thomas, Hughes, and Zumbo (1998) demonstrated that negative Pratt's measures are often associated with multicollinearity in multiple regression, where the independent variables are so highly correlated that one independent variable is completely linearly dependent on the others (i.e., perfectly correlated). In an oblique factor model, high correlations among the factors can create the same multicollinearity problems as in multiple regression. Unlike the suppression effect, which may be of genuine theoretical and practical interest, multicollinearity is a statistical problem that can probably be avoided technically by simply removing or combining the highly redundant factors.

Thomas et al. (1998, 2006) reminded researchers that some models are so complex that no single measure of variable importance satisfies Pratt's axioms. The suppression effect and multicollinearity are two such complex situations, in which all other coefficients and importance measures also encounter interpretational complexity (Thomas et al., 1998).

5.4 Suggestions for Future Research

The methodology literature has built a rich body of knowledge for interpreting multiple regression, where a single dependent variable is regressed on a set of observed independent variables. To some extent, the MLM connection between multiple regression and factor analysis makes part of this rich literature transferable to factor analysis. Nonetheless, multiple regression is, after all, not the same as factor analysis, where multivariate dependent variables are regressed on a set of latent independent variables. As pointed out earlier, this dissertation is the first attempt to order the importance of a set of oblique latent factors for multivariate observed data. Certainly, more work is needed to understand the theoretical and mathematical mechanics of ordering factor importance. Four major areas of future research are suggested below.

Hypothesis Testing and Confidence Intervals for Pratt's Measures

Pratt's measures, as we describe them, are descriptive statistics. A possible area for further research is to make inferences about the population parameters of Pratt's measures based on the sample estimates. This would enable researchers to test whether Pratt's measures are greater than a particular value of theoretical or practical interest, such as zero or 1/(2p). Unfortunately, the finite-sample distributional properties of Pratt's measures are technically complicated, even under the assumption of normal errors (Thomas, Zhu, & Decady, 2007).
To date, the sampling distributions and the analytical calculation of the standard errors of Pratt's measures in multiple regression have yet to be identified. This implies that the sampling distributions and standard errors needed for calculating p-values are hard to track even for univariate data; simultaneously testing q × p Pratt's measures is most likely an even more arduous puzzle. Fortunately, with the increasing capacity of modern computers, bootstrapping has become a popular method for making inferences about population parameters. Bootstrapping is a computer-intensive, non-parametric technique for statistical inference in situations where parametric inference is infeasible or involves very complicated formulas for the standard errors. It resamples from the empirical distribution of the observed data to estimate standard errors and confidence intervals. Bootstrapping is a promising line of research for making inferences about Pratt's measures in factor analysis, given that traditional parametric inference has met with considerable difficulty even in multiple regression.

In addition, Thomas et al. (2006) developed a new method for calculating point estimates and confidence intervals for Pratt's measures in multiple regression. This method is based on an asymptotic analysis of the properties of Pratt's measures in the limit as the sample size approaches infinity. The asymptotic variances are estimated simultaneously, allowing confidence intervals to be constructed for the Pratt's measures of the full set of independent variables. The advantage of this method is that the calculation requires only information routinely printed in the output of standard statistical programs; no new software is required. Thomas et al. (2006) also conducted a simulation to examine how the asymptotic confidence interval performs at sample sizes encountered in practice. Their results showed that the approximate variance estimate is suitably accurate for a sample size of 250 or more and, in many cases, for a sample size as small as 100. The asymptotic confidence intervals also provide coverage numerically close to the nominal 95% level for a sample size of 250 or more; for smaller samples, however, they tend to be liberal. Although this line of research was set in the context of multiple regression, it hints at a potential application in factor analysis.
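As a concrete starting point for either route, the sketch below (our own illustration with simulated data, not from the dissertation) shows a percentile bootstrap for Pratt's measures in the simpler regression setting, where d_j = beta_j * r_j / R²; extending it to factor analysis would replace the pratt_measures function with an EFA fit returning the pattern matrix and factor correlations.

import numpy as np

rng = np.random.default_rng(1)

def pratt_measures(X, y):
    # Pratt's measures in regression: d_j = beta_j * r_j / R^2,
    # computed from standardized variables.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]  # standardized slopes
    r = Xs.T @ ys / len(ys)                        # zero-order correlations
    return beta * r / (beta @ r)                   # R^2 = sum of beta_j * r_j

# Simulated data for illustration only.
n = 300
X = rng.normal(size=(n, 3))
X[:, 1] += 0.6 * X[:, 0]                           # induce predictor correlation
y = X @ np.array([0.5, 0.3, 0.1]) + rng.normal(size=n)

# Percentile bootstrap: resample cases, recompute Pratt's measures each time.
boot = np.array([pratt_measures(X[idx], y[idx])
                 for idx in rng.integers(0, n, size=(1000, n))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(np.round(lo, 3), np.round(hi, 3))            # 95% bootstrap CIs for d_1, d_2, d_3

The same resampling scheme carries over to factor analysis in principle, though each bootstrap replicate would require refitting and rotating the factor model, and aligning factors across replicates becomes an additional complication.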
Cut-offs for Pratt's Measures

Thomas (1992) suggested, as a general rule, that variables with Pratt's measures < 1/(2p) be considered unimportant. We have observed from the EFA and CFA examples that the < 1/(2p) criterion seems to work reasonably well, at least better than the traditional cut-offs. However, this rule was based on intuitive reasoning, namely that a variable with less than half the average importance is unimportant. There are as yet no empirical studies investigating the appropriateness of the < 1/(2p) rule or suggesting other cut-offs, such as one for being important in addition to one for being unimportant. These gaps suggest a need for a systematic review based on a large set of existing data, exploring the possibility of a common set of thresholds that classify Pratt's measures into easily interpretable categories: for example, (1) unimportant, (2) neither clearly important nor unimportant, and (3) important, applicable across different areas of the social and behavioural sciences. This type of research is similar to the studies conducted by Cohen (1988) to suggest easily interpretable effect sizes. The purpose is not to determine unanimous criteria for what should be considered "important" or "meaningful". Instead, the intention is to explore empirically whether there is a meaningful pattern of importance underlying Pratt's measures, fully acknowledging the diversity of disciplines and areas of research.

Graphical Representation of the Pratt's Measure Matrix

Bring (1996) and Thomas et al. (1998) used the geometry of least squares to interpret Pratt's measures for multiple regression. Based on these geometric foundations, it is intuitively straightforward to provide a two-dimensional graphic of the Pratt's measure matrix for users who have little knowledge of the geometry of least squares. Such a graphical depiction would help outline the story of a factor model characterized by a Pratt's measure matrix, and developing a graphical visualization could facilitate the presentation and communication of the new method.

Interpreting the Factor Suppression Effect

As mentioned earlier, there is little to no direct discussion of the existence, mechanism, and implications of the suppression effect in factor analysis, and there is certainly a need for future work to understand and interpret factor suppression effects systematically. Future work in the following directions is suggested. First, and most pressing, is to acknowledge the presence of suppression effects in factor analysis: much current practice proceeds with interpretation without noticing their possible presence. Second, future research can investigate the effect of the oblique rotation method and its accompanying specifications on the presence and mechanism of suppression relationships. Different rotation methods and specifications may lead to mathematically different suppression relationships, and it is worthwhile to explore rotation criteria that opt for a mathematically meaningful and interpretable factor suppression model. Third, building on the knowledge base of suppression effects in multiple regression, future work can focus on distinguishing, classifying, and interpreting different types of suppression effects in factor analysis. Fourth, Thomas et al. (1998) argued that because suppressor and non-suppressor variables contribute to a regression in entirely different ways, it is actually intuitive to assess the relative importance of the non-suppressors using Pratt's measures and to assess separately the relative importance of the suppressors to the non-suppressors using the measure (R² − R²_NS)/R², where R²_NS denotes the variance explained by the non-suppressor variables alone. It seems reasonable to consider this recommendation seriously for assessing suppression effects in factor analysis. However, in factor analysis, the same factor may act like a suppressor for one observed variable but not for another, or may display a negative suppression effect for one observed variable but another type for a different variable. Thus, the analogous formula, (h² − h²_NS)/h², works only for individual observed variables.

In conclusion, this dissertation is the first attempt to adapt Pratt's measures, developed for multiple regression, to factor analysis.
It has solved, in crucial ways, the major interpretational difficulties in understanding an oblique factor model. Moreover, it has instigated and formulated a line of research deserving of scholarly expertise and devotion in the future.

References

Bring, J. (1996). A geometric approach to compare variables in a regression model. The American Statistician, 50, 57-62.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Conger, A. J. (1974). A revised definition for suppressor variables: A guide to their identification and interpretation. Educational and Psychological Measurement, 34, 35-46.

Graham, J. M., Guthrie, A. C., & Thompson, B. (2003). Consequences of not interpreting structure coefficients in published CFA research: A reminder. Structural Equation Modeling, 10, 142-152.

Lancaster, B. P. (1999, January). Defining and interpreting suppressor effects: Advantages and limitations. Paper presented at the annual meeting of the Southwest Educational Research Association, San Antonio, TX.

Thomas, D. R. (1992). Interpreting discriminant functions: A data analytic approach. Multivariate Behavioral Research, 27, 335-362.

Thomas, D. R., Hughes, E., & Zumbo, B. D. (1998). On variable importance in linear regression. Social Indicators Research, 45, 253-275.

Thomas, D. R., Zhu, P. C., Zumbo, B. D., & Dutta, S. (2006, June). Variable importance in logistic regression based on partitioning an R² measure. Paper presented at the Annual Meeting of the Administrative Sciences Association of Canada (ASAC), Banff, Alberta.

Thomas, D. R., Zhu, P. C., & Decady, Y. J. (2007). Point estimates and confidence intervals for variable importance in multiple linear regression. Journal of Educational and Behavioral Statistics, 32, 61-91.

Zumbo, B. D. (2007). Validity: Foundational issues and statistical methodology. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics, Vol. 26: Psychometrics (pp. 45-79). Amsterdam: Elsevier.

Zumbo, B. D., Wu, A. D., & Liu, Y. (2008, March). Variable ordering when using regression with latent variables. Paper presented at the Annual Meeting of the American Educational Research Association (AERA), New York.

Appendices

Appendix A: The 24 Psychological Ability Tests in Holzinger & Swineford's (1939) Data

T1: Visual perception
T2: Cubes
T3: Paper form board
T4: Flags
T5: General information
T6: Paragraph comprehension
T7: Sentence completion
T8: Word classification
T9: Word meaning
T10: Addition
T11: Code
T12: Counting dots
T13: Straight-curved capitals
T14: Word recognition
T15: Number recognition
T16: Figure recognition
T17: Object - number
T18: Number - figure
T19: Figure - word
T20: Deduction
T21: Numerical puzzles
T22: Problem reasoning
T23: Series completion
T24: Arithmetic problems

Appendix B: Definitions of Six Theory-Guided Dimensions of Psychological Well-Being

Autonomy (AU)
High scorer: Is self-determining and independent; able to resist social pressures to think and act in certain ways; regulates behavior from within; evaluates self by personal standards.
Low scorer: Is concerned about the expectations and evaluations of others; relies on judgments of others to make important decisions; conforms to social pressures to think and act in certain ways.
Environmental Mastery (EM)
High scorer: Has a sense of mastery and competence in managing the environment; controls complex array of external activities; makes effective use of surrounding opportunities; able to choose or create contexts suitable to personal needs and values.
Low scorer: Has difficulty managing everyday affairs; feels unable to change or improve surrounding context; is unaware of surrounding opportunities; lacks sense of control over external world.

Personal Growth (PG)
High scorer: Has a feeling of continued development; sees self as growing and expanding; is open to new experiences; has sense of realizing his or her potential; sees improvement in self and behavior over time; is changing in ways that reflect more self-knowledge and effectiveness.
Low scorer: Has a sense of personal stagnation; lacks sense of improvement or expansion over time; feels bored and uninterested with life; feels unable to develop new attitudes or behaviors.

Positive Relations with Others (PR)
High scorer: Has warm, satisfying, trusting relationships with others; is concerned about the welfare of others; capable of strong empathy, affection, and intimacy; understands give and take of human relationships.
Low scorer: Has few close, trusting relationships with others; finds it difficult to be warm, open, and concerned about others; is isolated and frustrated in interpersonal relationships; not willing to make compromises to sustain important ties with others.

Purpose in Life (PL)
High scorer: Has goals in life and a sense of directedness; feels there is meaning to present and past life; holds beliefs that give life purpose; has aims and objectives for living.
Low scorer: Lacks a sense of meaning in life; has few goals or aims; lacks sense of direction; does not see purpose of past life; has no outlook or beliefs that give life meaning.

Self-Acceptance (SA)
High scorer: Possesses a positive attitude toward the self; acknowledges and accepts multiple aspects of self, including good and bad qualities; feels positive about past life.
Low scorer: Feels dissatisfied with self; is disappointed with what has occurred in past life; is troubled about certain personal qualities; wishes to be different than what he or she is.

Appendix C: SPSS Syntax for Inputting a Tetrachoric/Polychoric Correlation Matrix and Factor Analysis

* Replace the file path, variable names, sample size, and criteria according to your data.
MATRIX DATA VARIABLES=V1 TO V9
  /FILE="C:\TIMSS_TIME\TIME.dat"
  /FORMAT=FREE LOWER
  /N=8385
  /CONTENTS=CORR.

FACTOR
  /MATRIX=IN(COR=*)
  /VARIABLES V1 V2 V3 V4 V5 V6 V7 V8 V9
  /MISSING LISTWISE
  /ANALYSIS V1 V2 V3 V4 V5 V6 V7 V8 V9
  /PRINT INITIAL EXTRACTION ROTATION
  /PLOT EIGEN
  /CRITERIA FACTOR(3) ITERATE(25)
  /EXTRACTION ULS
  /CRITERIA ITERATE(25)
  /ROTATION PROMAX(4).
* FACTOR(3) extracts a specified number of factors (three in this example);
* alternatively, use MINEIGEN(1) for the eigenvalue-greater-than-1 extraction rule.
* ULS requests unweighted least squares extraction.
* Other oblique rotation methods may be used; "(4)" sets the kappa value to 4.

Appendix D: Positive and Negative Affect Schedule (PANAS)

Directions
This scale consists of a number of words that describe different feelings and emotions. Read each item and then circle the appropriate answer next to that word. Indicate to what extent you have felt this way during the past week. Use the following scale to record your answers.
(1) = Very slightly or not at all   (2) = A little   (3) = Moderately   (4) = Quite a bit   (5) = Extremely

Each of the following 20 items is rated on the 1-5 scale above:

1. Interested
2. Distressed
3. Excited
4. Upset
5. Strong
6. Guilty
7. Scared
8. Hostile
9. Enthusiastic
10. Proud
11. Irritable
12. Alert
13. Ashamed
14. Inspired
15. Nervous
16. Determined
17. Attentive
18. Jittery
19. Active
20. Afraid
