UBC Theses and Dissertations


Relaxed methods for evaluating measurement invariance within a multiple-group confirmatory factor analytic framework

Brace, Jordan Campbell

Abstract

Measurement invariance (MI) refers to the equivalent functioning of psychometric instruments when applied across different groups. Violations of MI can produce spurious between-group differences, or obscure true differences, in observed scores, means, and covariances. Chapter 1 introduces the multiple-group confirmatory factor analysis (MGCFA) approach to evaluating MI. The present research seeks to identify overly restrictive assumptions of the MGCFA approach and to provide alternative recommendations.

Chapter 2 notes that typical MGCFA MI models assume equivalent functioning of each item, while in practice applied researchers are often primarily interested in the equivalent functioning of composite scores. Chapter 2 therefore introduces an approach to assessing MI of composite scores that does not assume MI of all items: between-group equality constraints are placed on measurement parameter totals rather than on individual parameters. Invariance of parameter totals is referred to as "scale-level MI," while invariance of individual measurement parameters is referred to as "item-level MI." Power analyses of tests of scale-level and item-level MI illustrate that, although item-level MI models are nested within scale-level MI models, tests of scale-level MI are often more sensitive to violations of MI that affect the between-group comparability of composite scores.

Chapter 3 introduces an approach to quantifying between-group differences in classification accuracy when critical composite scores are used for selection and a minimum of partial scalar MI (MI of some, but not all, loadings and intercepts) is retained. Chapter 3 illustrates that different patterns of MI violations differentially affect classification accuracy ratios for different measures of classification accuracy, and that between-group differences in multiple sets of measurement parameters can have compensatory or additive effects on these ratios. The finite-sample variability of classification accuracy ratios is discussed, and a Bollen-Stine bootstrapping approach for estimating confidence intervals around classification accuracy ratios is recommended.

Chapter 4 addresses limitations of popular methods for assessing the fit of nested MI models. It introduces a modified RMSEA, RMSEA_D, for comparing the fit of nested MI models; RMSEA_D avoids both the sensitivity of chi-square tests to minor misspecifications and the dependence of ΔGFI interpretation on model degrees of freedom. Recommendations, limitations, and directions for future research are discussed in Chapter 5.
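The scale-level constraints described for Chapter 2 can be sketched as follows (notation is assumed for illustration, not taken from the dissertation). Rather than constraining each loading λ_j and intercept τ_j to equality across groups g = 1, 2, as item-level scalar MI does, scale-level MI constrains only the parameter totals across the p items of a composite:

\[
\sum_{j=1}^{p} \lambda_j^{(1)} = \sum_{j=1}^{p} \lambda_j^{(2)},
\qquad
\sum_{j=1}^{p} \tau_j^{(1)} = \sum_{j=1}^{p} \tau_j^{(2)}.
\]

Under these weaker constraints, individual items may function differently across groups so long as the differences cancel in the totals, which is what preserves the between-group comparability of the composite score.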
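For the RMSEA_D statistic discussed in Chapter 4, a minimal sketch is shown below. It assumes the common formulation in which the RMSEA formula is applied to the chi-square difference test between nested models; the exact definition and any multi-group adjustment used in the dissertation may differ.

```python
import math

def rmsea_d(chisq_diff, df_diff, n_total):
    """Sketch of an RMSEA computed on a nested-model chi-square difference.

    chisq_diff: chi-square difference between the nested MI models
    df_diff:    difference in model degrees of freedom
    n_total:    total sample size across groups

    Assumes the single-sample RMSEA convention applied to the difference
    test; truncates at zero when the difference is smaller than its df.
    """
    noncentrality = max(chisq_diff - df_diff, 0.0)
    return math.sqrt(noncentrality / (df_diff * (n_total - 1)))

# Illustrative values (not from the dissertation):
print(rmsea_d(30.0, 10, 500))  # ≈ 0.063
```

Because the statistic scales the noncentrality of the difference test by its own degrees of freedom, its interpretation does not shift with the size of the baseline models, which is the property contrasted with ΔGFIs above.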


Rights

Attribution-NonCommercial-NoDerivatives 4.0 International