STATISTICAL POWER FOR REPEATED MEASURES ANOVA

by

PATRICK JOHN POTVIN
B.Sc., Concordia University, 1988

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (School of Human Kinetics)

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA
April 1996
© Patrick John Potvin, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission. The University of British Columbia, Vancouver, Canada.

Abstract

Determining power a priori for univariate repeated measures (RM) ANOVA designs is a difficult and often excluded practice in the planning of experimental research. Complicated procedures and a lack of accessible computer power programs are among the problems which have discouraged researchers from performing power analysis on these designs. Another, more serious issue has been the lack of methods available for estimating the power of designs with two or more RM factors. Because of uncertainties about how to compute an appropriate error term when more than one variance-covariance matrix exists, analytical methods for approximating power are currently restricted to RM designs with only one within-subjects variable. The purpose of this study, therefore, was to facilitate the process of power determination by providing a series of power tables for ANOVA designs with one and two within-subject variables. A secondary objective was to investigate less well known power trends among ANOVA designs having heterogeneous (nonspherical) correlation matrices or two RM factors. Power was generated using analytical and Monte Carlo simulation methods for varying experimental conditions of sample size (5, 10, 15, 20, 25 & 30), effect size (small, medium & large), alpha (.01, .05 & .10), correlation (.4 & .8), variance-covariance matrix pattern (constant, ε = 1.00, and trend, ε < .56) and levels of RM (3, 6 & 9). Examination of the power results revealed that under conditions of nonsphericity (trend matrix pattern), power was greater at small effect sizes and lower at medium and large effect sizes than the values generated under spherical (constant matrix) structures. Regarding designs with two RM factors, the power of the main effects tests was greatest for a given condition so long as the average correlation among trials of the pooled factor was equal to or below that of the main effects factor. For interaction tests of the same model, power was greatest for a given condition when at least one factor had an average correlation across its trials equal to .80. From the simulation results, the relationship between error variance and power across different correlation matrices of the two-way RM design was examined, and approximations of the noncentrality parameter for each test of this model were derived.
Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgment
Chapter One Introduction
  Introduction
  Purposes of Study
  Definitions
Chapter Two Literature Review
  I. Factors Related To Power
  II. Power, RM ANOVA Assumptions and Violation of Sphericity
  III. Power Determination For RM Designs
  Study Expectations
  Hypotheses
Chapter Three Methodology
  I. RM ANOVA Designs and Experimental Conditions
  II. Power Determination
  III. Accuracy and Reliability of Power Estimates
  Delimitations of Study
Chapter Four Results
  I. One-Way Repeated Measures ANOVA
  II. Two-Way Repeated Measures ANOVA
  III. Two-Way Mixed ANOVA
Chapter Five Discussion
  I. One-Way Repeated Measures ANOVA
  II. Two-Way Repeated Measures ANOVA
Chapter Six Summary and Conclusions
References
Appendix 1.0 Letter Requesting Data From Authors
Appendix 2.1-2.3 Empirical Data Collected From Various Studies and Used To Determine Experimental Conditions of This Study
Appendix 3.0 FORTRAN Program For Calculating Noncentrality Parameter and Effect Sizes (d & f)
Appendix 4.1-4.3 Cell and Marginal Means of RM ANOVA Designs
Appendix 5.0 Function Used To Compute Effect Size (d) For Tests of Interaction in Two-Way (AxB) ANOVA Designs
Appendix 6.1-6.2 Correlation Matrices For ANOVA Designs
Appendix 7.1-7.4 Effect Size and Noncentrality Parameter Values For RM ANOVA Designs
Appendix 8.0 Seeds Used For Monte Carlo Simulations

List of Tables

Table 2.1 Summary of Relevant Studies Related To Power and Sample Size Estimation For RM ANOVA Designs
Table 2.2 Comparison of Computer Programs Available For Determining Power in RM ANOVA Designs
Table 3.1 Methods and Computer Programs Used to Determine Power For the Different Repeated Measures (RM) ANOVA Designs of This Study
Table 4.1 Power For a One-Way Repeated Measures ANOVA With 3 Levels
Table 4.2 Power For a One-Way Repeated Measures ANOVA With 6 Levels
Table 4.3 Power For a One-Way Repeated Measures ANOVA With 9 Levels
Table 4.4a Power of the A Main Effect For a 3(A) x 3(B) ANOVA With Repeated Measures on Two Factors
Table 4.4b Power of the B Main Effect For a 3(A) x 3(B) ANOVA With Repeated Measures on Two Factors
Table 4.4c Power of the AB Interaction For a 3(A) x 3(B) ANOVA With Repeated Measures on Two Factors
Table 4.5a Power of the A Main Effect For a 3(A) x 6(B) ANOVA With Repeated Measures on Two Factors
Table 4.5b Power of the B Main Effect For a 3(A) x 6(B) ANOVA With Repeated Measures on Two Factors
Table 4.5c Power of the AB Interaction For a 3(A) x 6(B) ANOVA With Repeated Measures on Two Factors
Table 4.6a Power of the A Main Effect For a 3(A) x 9(B) ANOVA With Repeated Measures on Two Factors
Table 4.6b Power of the B Main Effect For a 3(A) x 9(B) ANOVA With Repeated Measures on Two Factors
Table 4.6c Power of the AB Interaction For a 3(A) x 9(B) ANOVA With Repeated Measures on Two Factors
Table 4.7a Power of the Groups Main Effect For a 2(Groups) x 3(Trials) ANOVA With Repeated Measures on One Factor
Table 4.7b Power of the Trials Main Effect For a 2(Groups) x 3(Trials) ANOVA With Repeated Measures on One Factor
Table 4.7c Power of the Interaction Test For a 2(Groups) x 3(Trials) ANOVA With Repeated Measures on One Factor
Table 4.8a Power of the Groups Main Effect For a 2(Groups) x 6(Trials) ANOVA With Repeated Measures on One Factor
Table 4.8b Power of the Trials Main Effect For a 2(Groups) x 6(Trials) ANOVA With Repeated Measures on One Factor
Table 4.8c Power of the Interaction Test For a 2(Groups) x 6(Trials) ANOVA With Repeated Measures on One Factor
Table 4.9a Power of the Groups Main Effect For a 2(Groups) x 9(Trials) ANOVA With Repeated Measures on One Factor
Table 4.9b Power of the Trials Main Effect For a 2(Groups) x 9(Trials) ANOVA With Repeated Measures on One Factor
Table 4.9c Power of the Interaction Test For a 2(Groups) x 9(Trials) ANOVA With Repeated Measures on One Factor
Table 5.1 Mean Statistics Generated From Monte Carlo Simulation (replications = 3000) For Different Two-Way RM ANOVA Designs Under Medium Effect Size

List of Figures

Figure 3.01 Experimental conditions of the RM ANOVA designs for which power was generated in this study
Figure 4.01 Comparisons of power between one-way repeated measures ANOVA (K = 6) with constant and trend correlation matrices under varying effect sizes
Figure 4.02 Comparisons of power for one-way ANOVA designs with 3, 6, and 9 repeated measures (RM) under constant and trend correlation matrices and small effect size (.2)
Figure 4.03 Comparisons of power for one-way ANOVA designs with 3, 6, and 9 repeated measures (RM) under constant and trend correlation matrices and large effect size (.8)
Figure 4.04 Power for one-way ANOVA designs with 3, 6, and 9 repeated measures under varying correlation matrices, effect sizes and alpha
Figure 4.05 A comparison of power across different correlation matrices for tests of a 3 x 6 RM ANOVA design
Figure 4.06 Change in power for the "A" test of a two-way RM ANOVA as the number of levels of factor "B" increases under varying effect sizes and correlation matrices
Figure 4.07 Change in power for the "B" test of a two-way RM ANOVA as the number of levels of factor "B" increases under varying effect sizes and correlation matrices
Figure 4.08 Change in power for the "AB" test of a two-way RM ANOVA as the number of levels of factor "B" increases under varying effect sizes and correlation matrices
Figure 4.09 A comparison of power between the A, B and AB tests of the two-way RM ANOVA under different levels of factor B (3, 6, and 9) and correlation matrices
Figure 4.10 Comparisons of power between two-way mixed ANOVA designs for the Groups, Trials and Groups by Trials tests as the average correlation among repeated trials is increased
Figure 4.11 Comparisons of power between two-way mixed ANOVA designs with different levels of RM (3, 6 and 9) under varying effect sizes for the Groups, Trials and Groups by Trials tests
Figure 4.12 A comparison of power between tests of a 2 x 6 mixed ANOVA design as the average correlation among repeated trials is increased
Figure 5.01 F distributions for one-way repeated measures ANOVA designs when effect size is small (.2) and the pattern of the correlation matrix is altered
Figure 5.02 F distributions for one-way repeated measures ANOVA designs when effect size is medium (.5) and the pattern of the correlation matrix is altered
Figure 5.03 F distributions for one-way ANOVA designs with 3, 6 and 9 repeated measures under a constant correlation matrix and varying effect size
Figure 5.04 F distributions for one-way ANOVA designs with 3, 6 and 9 repeated measures under a trend correlation matrix and varying effect size
Acknowledgment

I would like to extend a special thanks to Dr. Schutz for his tremendous help, guidance and support throughout this one and a half year project. He rescued me when it seemed I would drown in a sea of academic discontent and gave me an opportunity to move in a new and challenging direction. Dr. Schutz represents the true researcher, fully dedicated to his field and the pursuit of knowledge. He is the epitome of the ideal graduate advisor and I feel privileged to have learnt from him and to have worked under his wing. I would also like to thank Dr. Martin and Dr. Courts for agreeing to be a part of this project and extending their interests in an area that was rather removed from their own. Lastly, I would like to thank Michelle for being as wonderful as she is. She gave me the confidence and courage to undertake such a project and was the light at the end of my tunnel each and every day. Her love and support is my foundation of strength and I love her very much.

"First choo get da money; then when choo get da money, choo get da Power. And when choo get da Power, then choo get da women!" - Tony Montana, Scarface.

Chapter One
Introduction

An important part of the planning and formulation of experimental research involves determining a study's power to show a statistically significant effect. Power, in hypothesis testing, represents the probability of detecting a true effect of a given magnitude or, more specifically, the chance of rejecting a null hypothesis when a difference in the population actually exists. Determining a study's power a priori is instrumental in helping the researcher decide whether an experiment is worth the time, money and effort required to conduct it. An investigation having low power stands a small chance of showing an effect and therefore should not be pursued without at least some modification to its experimental design (Olejnik, 1984).

Modifying a research design in order to maximize its chances of showing a true effect requires an understanding of those statistical parameters which influence power. These include sample size, or the number of subjects used in the study; the expected effect size, or magnitude of the difference between means judged meaningful; the significance level (alpha); the error variance associated with the dependent variable(s); and the type of statistical analysis being used (Cohen, 1988; Kraemer & Thiemann, 1987; Lipsey, 1990; St. Pierre, 1980). Generally, an increase in sample size, effect size or level of significance increases the power of an experimental design, while an increase in error variation reduces it. Among types of designs, those which take into consideration more information about subjects (i.e. ANCOVA, repeated measures ANOVA) tend to provide greater power (Olejnik, 1984) than those which account for less (i.e. randomized group ANOVA). Since the process of power estimation requires previous knowledge of these factors, power analysis serves as a helpful adjunct in understanding the limitations and strengths of a study.

Despite its importance and obvious benefit to the researcher, power analysis is frequently a neglected component in experimental planning (Cohen, 1988; Howell, 1992). One of the main reasons for this is that methods of power determination are often laborious and computationally complex.
For the simpler statistical procedures, such as t-tests, significance testing of correlations, and randomized group analysis of variance (RG ANOVA), the process of power estimation is relatively straightforward. Computer programs, tables of power values and analytical formulae are abundant in the literature and provide simple, quick and, in most cases, reliable methods for determining the power of these tests (Borenstein & Cohen, 1988; Bradley, 1989; Cohen, 1988; Kraemer & Thiemann, 1987; Lipsey, 1990; PASS, 1991; SOLO, 1992).

For other designs, the process of power determination is not as straightforward. This is particularly evident in the case of repeated measures analysis of variance designs (RM ANOVA), where several problematic issues limit the implementation of power estimation for these tests. At the forefront is the perceived complexity and tedious nature of most analytical solutions described in the literature, which discourages their use among researchers who lack a strong statistical background. Although both commercial and local computer programs have become available over the years to facilitate power analysis of these designs, these too have limitations in that many are not user-friendly, are difficult to access and, in some cases, do not provide accurate estimates of power.

A more serious limitation of power analysis for these designs is the lack of solutions available for certain types of RM ANOVA tests. Currently, a priori analytical methods for approximating power are mainly limited to simpler RM designs, specifically those with a single within-subjects variable. Davidson (1972) provided univariate and multivariate solutions for approximating power in the single repeated measures (RM) design, while Marcucci (1986) presented power procedures for the one-way within-subjects and the two-way mixed experimental design using both a univariate and a multivariate approach. Among other work closely related to power estimation, Vonesh and Schork (1986) derived sample size formulae for the one-way RM design using univariate and multivariate approaches, while Rochon (1991) extended their work to include the two-group within-subjects design using only a multivariate technique. What is evident from a survey of the literature on power analysis is the apparent lack of methods available for estimating power or sample size for ANOVA designs with more than one repeated measures factor.

Presently, power solutions (univariate or multivariate) which effectively account for the various correlation matrices of multiple RM designs are deficient. One of the main reasons for this deficiency may be the inherent difficulty of deriving power approximations analytically when two or more distinct correlation (r) matrices are present within a design. Normally, when determining power analytically for an RM ANOVA with one r matrix (one within-subjects variable), the average r of the matrix is used in the power calculations to express the amount of within-subject variance. However, when two or more RM factors are present, three types of average correlations exist, one for each independent matrix of the design (within-factor A, within-factor B and AB matrices), and it is not clear how these different matrices interact, if at all, to affect the error variance and thus the power of a particular test.
Nor is it certain whether simply substituting the average r of a given matrix (instead of, for example, the minimum correlation coefficient) in the power formulae of a particular test is an appropriate procedure to follow. Pilot work conducted by the researchers of this study (Potvin & Schutz, 1995) has revealed that the relationship between power and correlation for designs with multiple RM factors is more complicated than that expressed in the power formulae of designs with just one r matrix. That is, the average correlation coefficient of a given matrix in a multiple RM design does not adequately account for the change that occurs in the error variance of its respective test. Perhaps because of the uncertainty concerning this relationship, few attempts have been made to resolve the issue computationally. Although Winer, Brown and Michels (1991) and Dodd and Schultz (1973) provide a post hoc procedure for determining power for these designs using omega-squared (ω²), a measure of the magnitude of the experimental effect, such a method is not very practical for a priori power analysis since it requires the researcher to know ahead of time the mean square treatment and error variance of the test involved.

Another problem complicating the computation of power for these designs is the pattern, or variability, among the r coefficients of the matrix (or matrices) involved. When the r coefficients within a matrix form a heterogeneous or simplex (trend) pattern, that is, they decrease substantially in magnitude across the levels of the matrix, an important assumption of univariate ANOVA, called sphericity, is violated. Under this assumption, the variances of all pairwise differences between the trial means involved should be about equal. When violation of the sphericity assumption has occurred (nonsphericity), using an average r value in existing power formulae becomes inappropriate, since the effect on the error variance of the involved test is no longer expressed adequately by this variable alone. Although some methods have been developed to deal with this (Muller & Barton, 1989, 1991; Mulvenon, 1993; Rochon, 1991), they are complicated and, in some cases, not appropriate for univariate tests, making power estimation difficult for designs with such r structures. In addition, how power is affected under conditions of nonsphericity has not been well documented.

As a result of these problematic issues, power estimation, particularly for RM designs with multiple within-subject variables or heterogeneous r matrices, remains difficult and/or obsolete. Currently, investigators whose experimental designs involve multiple RM factors or nonspherical r matrices are faced with either avoiding the power issue altogether or estimating values using procedures that are difficult and/or less practical and applicable for their design. Since such designs are frequently encountered in the field of human kinetics, it is important that the power of these statistical tests under varying conditions be determined and their values made available. While the complexity involved in deriving power estimates for these conditions may discourage attempts at resolving these issues analytically, an alternative but somewhat less accurate method for accomplishing this task is Monte Carlo simulation. This process, which uses computer simulation to generate several hundred or thousand replications of a particular RM analysis test, can provide approximations of power based on the number of tests found significant. By tabulating these results, both power and sample size values can be made available for specific RM designs under varying conditions. In the past, Monte Carlo simulation has proven useful for approximating power in two-way mixed RM ANOVA models (Grima, 1987; Mendoza et al., 1974; Muller & Barton, 1989), but this method has not been extended to designs involving two or more RM factors and has received only limited use among tests that do not meet the assumption of sphericity.
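The general recipe (simulate many data sets under a hypothesized set of means and correlations, run the RM ANOVA on each, and take the proportion of significant F tests as the power estimate) can be illustrated with a short program. The thesis's own simulations used purpose-built FORTRAN routines (Appendix 3.0); the Python sketch below is only a minimal illustration of the idea, and all of its numerical inputs (means, standard deviation, correlation, sample size) are hypothetical placeholders rather than conditions from this study.

```python
import numpy as np
from scipy.stats import f

def simulate_power_one_way_rm(mu, sigma, rho, n, alpha=0.05, reps=3000, seed=1):
    """Monte Carlo power for the uncorrected F test of a one-way RM ANOVA.

    mu    : hypothesized trial means (length k)
    sigma : common within-trial standard deviation
    rho   : common correlation among trials (compound symmetry assumed)
    n     : number of subjects
    """
    rng = np.random.default_rng(seed)
    k = len(mu)
    # Compound-symmetric population covariance matrix.
    cov = sigma**2 * (rho * np.ones((k, k)) + (1 - rho) * np.eye(k))
    df1, df2 = k - 1, (n - 1) * (k - 1)
    f_crit = f.ppf(1 - alpha, df1, df2)
    hits = 0
    for _ in range(reps):
        y = rng.multivariate_normal(mu, cov, size=n)           # n subjects x k trials
        grand = y.mean()
        ss_trials = n * ((y.mean(axis=0) - grand) ** 2).sum()
        ss_subjects = k * ((y.mean(axis=1) - grand) ** 2).sum()
        ss_error = ((y - grand) ** 2).sum() - ss_trials - ss_subjects  # subjects x trials residual
        f_obs = (ss_trials / df1) / (ss_error / df2)
        hits += f_obs > f_crit
    return hits / reps

# Hypothetical example: k = 3 trials with modest mean differences.
print(simulate_power_one_way_rm(mu=[10.0, 10.5, 11.0], sigma=1.0, rho=0.4, n=15))
```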
Purposes of Study

The problems inherent in power estimation for RM designs served as the rationale for the present study. One of the main purposes of this investigation was to provide researchers in the field of human kinetics with a more accessible method for determining the univariate power of one- and two-way ANOVA designs involving single and double RM factors. This was accomplished by generating power values using analytical and Monte Carlo methods and making these estimates available in the form of power tables for varying conditions of sample size, effect size, and magnitudes and patterns of correlation. A secondary purpose of this study was to describe some of the power trends which occurred under conditions of nonsphericity and among tests having two RM factors.

Definitions

The following terms appear frequently throughout this dissertation and have been defined to facilitate understanding.

One- and Two-Way RM ANOVA: A univariate analysis of variance test having one and two repeated measures (within-subjects) variables, respectively.

Two-Way Mixed ANOVA: A univariate analysis of variance test having one repeated measures (within-subjects) and one randomized-group (between-subjects) variable.

Test: Refers to any of the F tests in a one- or two-way ANOVA design (main effects and interaction).

Experimental Condition: Refers to any of the statistical parameters in a design, including effect size, sample size, alpha, number of repeated measures, and magnitude and pattern of correlation.

Constant r matrix pattern: Refers to a correlation (r) structure in which all coefficients are equal in magnitude and meet the assumption of sphericity (see section II of Chapter Two for a definition of this assumption). Synonymous terms include a "spherical design/condition" and "r structures with high epsilon (1.00)".

Trend r matrix pattern: Refers to an r structure in which the coefficients decrease in magnitude across the levels of the matrix (simplex pattern), resulting in violation of the assumption of sphericity. Synonymous terms include a "nonspherical design/condition" and "r structures with low epsilon (< .56)".

Pooled Factor: Refers to the factor (e.g. A or B) in a two-way ANOVA whose levels (scores) are averaged over, or "pooled", to produce a single score for each level of the other factor. A synonymous term is an "averaged-over factor".

Chapter Two
Literature Review

This chapter discusses important aspects related to power analysis of RM designs. The first section includes an explanation of those factors which influence power and how they are interrelated.
The second section discusses vital assumptions about RM designs and illustrates how the violation of some of them can affect the power of the univariate F test. The last section concludes with a review of methods currently available for estimating power in RM ANOVA.

I. Factors Related to Power

The following provides a brief explanation of the theory behind power estimation and how effect size, level of significance, sample size, correlation and the noncentrality parameter relate to statistical power.

Noncentrality Parameter (λ)

When the null hypothesis is false, the F ratio (MS trials / MS error) for the one-way RM ANOVA or a main effect of a factorial RM test no longer assumes a central F distribution (Howell, 1992; Winer et al., 1991). Rather, it follows a noncentral F distribution based upon the noncentrality parameter, λ (lambda), where

E(F) = \frac{df_2}{df_2 - 2} \cdot \frac{df_1 + \lambda}{df_1}    (2.00)

and

\lambda = \frac{n \sum_j (\mu_j - \mu)^2}{\sigma^2_e}    for a one-way RM ANOVA    (2.01)

\lambda_A = \frac{n q \sum_j (\mu_j - \mu)^2}{\sigma^2_e}    for the grouping factor (A) main effect of a two-way mixed ANOVA    (2.02)

\lambda_B = \frac{n p \sum_k (\mu_k - \mu)^2}{\sigma^2_e}    for the RM factor (B) main effect of a two-way mixed ANOVA    (2.03)

\lambda_{AB} = \frac{n \sum_j \sum_k (\mu_{jk} - \mu_j - \mu_k + \mu)^2}{\sigma^2_e}    for the grouping x RM interaction (AxB) of a two-way mixed ANOVA    (2.04)

E(F) represents the expected value of the overall F ratio; df_1 and df_2 are the degrees of freedom for the numerator and denominator of the F statistic; μ_jk = the cell mean; μ_j and μ_k = the marginal means for the levels of the randomized group (RG) and repeated measures (RM) factors respectively; μ = the grand mean; n = the sample size per group; p and q = the number of levels of the RG and RM factors respectively; and σ²_e = the error variance for the specific effect (Bradley, 1989). The noncentrality parameter represents the factor by which the F ratio departs from the central F distribution and signifies the true distribution of F when a difference between means actually exists (Howell, 1992; Winer et al., 1991). Closely associated with λ is another statistic, φ (phi), which is a function of λ and the number of trials and/or groups in the design. For a one-way RM ANOVA it is represented as

\phi = \sqrt{\lambda / k}    (2.05)

While some researchers use λ (Bradley, 1989; Howell, 1992) and others φ (Winer et al., 1991) in their discussions of power, both are considered general representations of a noncentral measure of the F distribution.

The relationship between power and λ is a curvilinear one. Generally, as λ increases so does the power of the test, until a maximum level is reached at which point any further increase in λ has no effect on power. Therefore, an experimental design with a large difference between means (e.g. a big Σ(μ_j - μ)²) will have greater power than one with a smaller difference, all other conditions being constant. Lui and Cumberland (1992) and Vonesh and Schork (1986) expressed the relationship between λ and power using the cumulative distribution function

\text{Power} = 1 - F\big(F_{\alpha;\, df_1, df_2};\; df_1,\, df_2,\, \lambda\big)    (2.06)

where F_{α; df_1, df_2} is the upper αth percentile of the F distribution with df_1 and df_2 degrees of freedom, and F(w; df_1, df_2, λ) is the noncentral F distribution function with noncentrality parameter λ. When a true effect exists, most other factors related to power exert their influence by either increasing or decreasing λ.
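Equation 2.06 can be evaluated with any routine that supplies the central and noncentral F distribution functions. As a minimal illustration (not the procedure used in this thesis), the following Python/SciPy sketch returns power for a given λ, degrees of freedom and alpha; the example values are hypothetical.

```python
from scipy.stats import f, ncf

def power_from_lambda(lmbda, df1, df2, alpha=0.05):
    """Power = P(F' > F_crit), where F' is noncentral F with parameter lambda (eq. 2.06)."""
    f_crit = f.ppf(1 - alpha, df1, df2)          # upper alpha-th percentile of the central F
    return 1.0 - ncf.cdf(f_crit, df1, df2, lmbda)

# Hypothetical example: one-way RM design with k = 3 trials and n = 15 subjects.
k, n, lmbda = 3, 15, 6.0
print(power_from_lambda(lmbda, df1=k - 1, df2=(n - 1) * (k - 1)))
```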
Effect Size

Effect size (ES) represents the magnitude of the difference between means, or the extent of the treatment effect, in standard deviation units (Lipsey, 1990). Cohen (1988) describes ES for a design involving only two groups as the standardized difference between the means, represented by d, where

d = \frac{\mu_A - \mu_B}{\sigma}    (2.07)

and μ_A - μ_B is the difference between the population means and σ = the common or pooled standard deviation of the two groups. Since both the numerator and denominator are expressed in the units of the dependent variable, ES, like λ, is unitless. For designs with three or more groups (RG ANOVA), Cohen (1988, p. 275) expresses ES as f, the "standard deviation of the standardized means", represented by

f = \frac{\sigma_m}{\sigma}    (2.08)

where

\sigma_m = \sqrt{\frac{\sum_{j=1}^{k} (\mu_j - \mu)^2}{k}}    (2.09)

k = the number of groups in the design and σ² = the average within-groups error variance. Since both d and f are representations of ES, they should be related to one another. Cohen (1988) shows this relationship for the condition where population means are evenly spread apart as follows:

f = \frac{d}{2}\sqrt{\frac{k+1}{3(k-1)}}, \quad \text{where } d = \frac{\mu_{max} - \mu_{min}}{\sigma}    (2.10)

and μ_max - μ_min is the difference between the maximum and minimum population means.

As shown by Winer et al. (1991) and Bradley (1989), under the two-way mixed ANOVA model a slight modification to Cohen's f is required, since the appropriate error term is different from that of the complete RG model for each test involved. For a main effect or interaction term involving one repeated measures factor, the error variance is represented by σ²_{Swg×T}, the subjects-within-groups by trials interaction, which can be calculated from σ² as σ²_{Swg×T} = σ²(1 - ρ), where ρ is the average of the k(k-1)/2 correlation coefficients among the k trials (Winer et al., 1991). For a main effect involving a randomized group factor, the error term, σ²_{Swg}, is expressed as σ²_{Swg} = σ²[1 + (q-1)ρ], the variance attributed to subjects within groups, where q = the number of levels of the RM factor. Therefore, substituting these modified error terms for an RM ANOVA test, f can be expressed as

f = \sqrt{\frac{\sum_j (\mu_j - \mu)^2 / k}{\sigma^2 (1 - \rho)}}    for the one-way RM test    (2.11)

f_A = \sqrt{\frac{\sum_j (\mu_j - \mu)^2 / p}{\sigma^2 [1 + (q-1)\rho]}}    for the grouping factor (A) main effect of a two-way mixed ANOVA    (2.12)

f_B = \sqrt{\frac{\sum_k (\mu_k - \mu)^2 / q}{\sigma^2 (1 - \rho)}}    for the RM factor (B) main effect of a two-way mixed ANOVA    (2.13)

f_{AB} = \sqrt{\frac{\sum_j \sum_k (\mu_{jk} - \mu_j - \mu_k + \mu)^2 / [(p-1)(q-1)+1]}{\sigma^2 (1 - \rho)}}    for the G x RM interaction (AxB) of a two-way mixed ANOVA    (2.14)

It should seem evident from these formulae that effect size is directly related to λ by

\lambda = n k f^2    for the one-way RM ANOVA    (2.15)

\lambda = n p q f^2    for the main effects of the two-way mixed RM ANOVA    (2.16)

\lambda = n[(p-1)(q-1)+1] f^2    for the interaction of the two-way mixed RM ANOVA    (2.17)

Therefore, any increase in effect size caused by a larger difference between means, a reduction in the variance of the dependent variable, or a combination of both will increase λ and thus power among these designs.
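To make these relations concrete, the short sketch below assembles equations 2.11, 2.15 and 2.06 for a one-way RM design: it computes Cohen's f from a set of hypothesized trial means, a common standard deviation and an average correlation (sphericity is assumed), converts f to λ, and converts λ to power. The numerical inputs and the helper name are hypothetical, invented only for illustration.

```python
import numpy as np
from scipy.stats import f, ncf

def one_way_rm_power(mu, sigma, rho_bar, n, alpha=0.05):
    """Cohen's f, lambda and power for a one-way RM ANOVA under sphericity (eqs. 2.11, 2.15, 2.06)."""
    mu = np.asarray(mu, dtype=float)
    k = mu.size
    sigma_err2 = sigma**2 * (1.0 - rho_bar)                        # sigma^2 * (1 - rho), eq. 2.19
    f_es = np.sqrt(((mu - mu.mean()) ** 2).mean() / sigma_err2)    # Cohen's f, eq. 2.11
    lmbda = n * k * f_es**2                                        # eq. 2.15
    df1, df2 = k - 1, (n - 1) * (k - 1)
    power = 1.0 - ncf.cdf(f.ppf(1 - alpha, df1, df2), df1, df2, lmbda)
    return f_es, lmbda, power

# Hypothetical example: three trials, sigma = 1, average correlation .4, n = 15.
print(one_way_rm_power([10.0, 10.5, 11.0], sigma=1.0, rho_bar=0.4, n=15))
```

With the same hypothetical inputs, an analytical value computed this way should land close to the corresponding Monte Carlo estimate from the earlier simulation sketch.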
Level of Significance (α)

The level of significance, or alpha (α), which represents the probability of rejecting a true null hypothesis (type I error) in significance testing, has a direct nonlinear relationship to power (Lipsey, 1990). This relationship is reflected in equation 2.06, which shows that when α is increased for any given numerator and denominator degrees of freedom, the critical F ratio of the central F distribution decreases (shifts to the left, towards the central part of this distribution). Since power represents the proportion of the noncentral distribution immediately to the right of this value, a decrease in the critical F causes a greater proportion of the noncentral F distribution to fall within the rejection zone, thereby increasing power.

Unlike other factors related to power, α represents one of the few parameters a researcher has complete control over, since it is set by the investigator him/herself. Unfortunately, by convention α is almost universally set at .05 or .01 and is rarely tolerated above the 5% level (Lipsey, 1990). Therefore the extent to which an alteration in α improves the power of a design is modest at best, and in many cases it is the least effective of the related factors.

Sample Size (n)

Sample size, or the number of subjects (n) in a group, has a direct nonlinear relationship to power, as expressed in equations 2.01-2.04 for λ. Since n is found in the numerator of these formulae, it is clear that for any given RM design an increase in n will result in an increase in λ, and therefore power, when all other parameters are held constant. A less mathematical approach to interpreting the relationship of n to power can be explained by the central limit theorem. This theorem states that the sampling distribution of the mean will approach a normal distribution as n increases (Howell, 1992). In other words, since the sample mean is distributed normally with a standard deviation of s/√n, an increase in n is more likely to produce a sample mean that deviates less from the true population mean. Therefore, if a difference exists in the population, or there is a true treatment effect, the sample means are more likely to reflect this as n becomes larger. In a visual display of sampling distributions based on large n's, this is reflected as less overlap between distributions that are truly different from one another, an example of the greater power that exists with larger sample sizes. In an ANOVA test, the influence of increased sample size results in a larger mean sum of squares for the treatment effect (MS_treat), since the variance associated with this effect is weighted by n (e.g. nΣ(μ_j - μ)²). The overall result is a larger numerator and therefore a bigger F ratio.

Correlation

One of the advantages of RM designs over randomized group (RG) designs is that the former allow the overall variability of treatment scores to be reduced for those effects involving an RM factor (Howell, 1992). This is because the dependency, or correlation, that results among scores when the same subjects are used for all conditions offers an opportunity to remove between-subject differences from the within-groups (RG) error term (σ²). This was illustrated earlier in equations 2.11-2.14, in which f, the effect size index for randomized group ANOVA designs, was modified to account for correlated measures within the RM designs. These equations show how the correlation among the levels of the treatment condition modifies the RG error variance to produce unique error terms for the different effects associated with the RM ANOVA design. Winer et al. (1991, pp. 261-267) explained these modifications mathematically. For a main effect or a one-way ANOVA involving an RM factor, the mean sum of squares error (MS_error) is given by

MS_{error} = \overline{var} - \overline{cov} = \bar\sigma^2 - \bar\sigma_{cov}    (2.18)

where σ̄² = the mean variance among the levels of the treatment condition and σ̄_cov is the average of all the covariances in the variance-covariance matrix of the RM factor, which, as Winer et al. (1991) point out, solely represents the variance attributed to between-subject differences.
Since σ̄_cov = σ̄²ρ, with substitution we arrive at

MS_{error} = \bar\sigma^2 - \bar\sigma^2\rho = \bar\sigma^2(1 - \rho), \quad \text{or simply } \sigma^2(1 - \rho)    (2.19)

which is used in the denominator of equations 2.11, 2.13 and 2.14. With subject differences removed, the error term now represents the residual variance associated with the subjects by treatment interaction (one-way design) or the subjects-within-groups by treatment interaction (mixed factorial design).

It should therefore seem evident that the degree to which the error variance is reduced, as shown in equations 2.11, 2.13 and 2.14, depends on the magnitude of the average correlation among the treatment conditions. As the average correlation increases, the degree to which the error term is reduced also increases. For a one-way RM ANOVA design, or the RM main effect and interaction of a mixed factorial test, this reduction in error variance produces a greater f and λ and a concomitant increase in the power of the associated effect when a true difference exists. With regard to ANOVA test results, a reduction in the error term (MS_error) will produce a larger F value, all else being equal.

Unlike other factors affecting power, the relationship between correlation and power is not consistent for all effects within a factorial RM design. Winer et al. (1991) show that for main effects involving a grouping factor, the error term (MS_Swg, the subjects-within-groups variance) is directly related to correlation by

MS_{Swg} = \overline{var} + (q-1)\overline{cov} = \bar\sigma^2 + (q-1)\bar\sigma^2\rho = \bar\sigma^2[1 + (q-1)\rho]    (2.20)

Therefore, from equation 2.20 we see that in a mixed ANOVA design the error term for a grouping main effect will actually increase as the number of levels and/or the correlation among the treatment conditions increases. This makes sense when one considers that averaging many highly dependent scores across the levels of an RM factor will produce less residual variance among the scores for each subject, enhancing between-subject differences within a group and therefore resulting in greater within-group variability than when many independent scores are averaged. In contrast, a lower correlation among scores will decrease differences between subjects, due to a higher residual variance in each person's scores, and thus reduce the within-group error.

The effects of altering the within-groups variance on the effect size and power of a mixed ANOVA test can be demonstrated by equations 2.12 and 2.16. As the error term increases with a rise in correlation, f and λ decrease, resulting in less power to detect a true group effect. In a simulated ANOVA test, this will result in a smaller F ratio.
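A quick numerical illustration of equations 2.19 and 2.20, with arbitrary values chosen only to show the direction of the two trends: as the average correlation among q repeated trials rises, the error term for effects involving the RM factor shrinks while the error term for the between-groups main effect grows.

```python
# Error terms of a two-way mixed ANOVA as a function of the average correlation
# (hypothetical values: sigma^2 = 1, q = 6 repeated measures); eqs. 2.19 and 2.20.
sigma2, q = 1.0, 6
for rho in (0.0, 0.4, 0.8):
    ms_error_trials = sigma2 * (1 - rho)             # subjects within groups x trials (eq. 2.19)
    ms_error_groups = sigma2 * (1 + (q - 1) * rho)   # subjects within groups (eq. 2.20)
    print(f"rho = {rho:.1f}: trials error = {ms_error_trials:.2f}, groups error = {ms_error_groups:.2f}")
```

With these inputs the trials error term falls from 1.00 to 0.20 as ρ rises from 0 to .8, while the groups error term climbs from 1.00 to 5.00, which is the power trade-off described in the text.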
The effects of correlation on the error variance and power of the different tests in the two-way mixed ANOVA model, as described in this section, may also be applied to RM designs with two or more randomized group factors. The only difference is that an additional variable is required in equations 2.12-2.17 for each new factor added to the design. However, for designs with two or more RM factors, in which a separate variance-covariance matrix exists for each variable and interaction of variables having repeated measures, no mathematical derivations are available at present to explain what influence multiple matrices will have on the error variance of the different tests. While it may seem that the equation for calculating the appropriate error term in the single RM factor ANOVA (2.19) may also apply to these tests, a major problem exists in determining what correlation value to use in the calculations. Results from a preliminary project conducted by this researcher suggest that simply using the average correlation of each respective matrix in equation 2.19 will not suffice when other RM factors are present (Potvin and Schutz, 1995). This is because the average correlation of one factor seems to be influenced by the magnitude of the average correlation and the number of repeated measures of the other factor(s) present in the design. Therefore, under these conditions, the effect on the error variance of these tests can be expected to differ from what would otherwise be observed if the average correlation coefficient of a pooled or overall matrix were simply used. This implies the relationship between correlation and error variance is different from that expressed in equation 2.19. It appears no other work has been undertaken to explain this relationship analytically for designs with two or more RM factors. Thus the effect of multiple RM variables on power is currently unknown.

II. Power, RM ANOVA Assumptions and Violation of Sphericity

A. Univariate Assumptions

In addition to the usual statistical assumptions necessary for all ANOVA designs (multivariate normality, homogeneity of variances, linearity and independence across subjects/units), RM tests are also restricted by certain assumptions concerning the structure of the variance-covariance matrix (or matrices) involved. These include the assumptions of compound symmetry, circularity and sphericity.

Compound Symmetry: When all the variances of the treatment levels are equal (i.e. σ²_j = σ²_k = σ²_l), as well as all their covariances (i.e. σ_jk = σ_jl = σ_kl), the resultant variance-covariance matrix, Σ, is said to have a pattern of compound symmetry, CS (Winer et al., 1991). Since Σ is simply a different expression of the correlation matrix, this implies that the correlations between observations of any pair of treatment conditions in the matrix are also all equal (i.e. ρ_jk = ρ_jl = ρ_kl). Huynh and Feldt (1970) showed that the assumption of equal correlations (compound symmetry) is a sufficient but not necessary condition for the univariate RM ANOVA. This means that so long as other, less restrictive assumptions hold (to be discussed), violations of the pattern of compound symmetry will not require adjustments to the critical F ratio of the univariate test.

Circularity: Unlike CS, the assumption of circularity does not require that all covariances of a matrix be homogeneous. Instead, Winer et al. (1991) demonstrated that if the variances of all pairwise differences between treatment means equal a constant, then the F test will be valid and will not require adjustment. This less restrictive assumption can be expressed algebraically as follows:

\sigma^2_j + \sigma^2_k - 2\sigma_{jk} = 2\lambda^* \quad \text{for all } j, k \text{ pairs}    (2.21)

where σ²_j and σ²_k are the variances for a pair of treatment conditions, σ_jk is the covariance for the pair, and 2λ* is the constant (λ* here denotes a constant, not the noncentrality parameter). Winer et al. also showed that, since

\overline{var} - \overline{cov} = \lambda^*, \quad \text{or} \quad \bar\sigma^2 - \bar\sigma_{cov} = \lambda^*    (2.22)

that is, the difference between the average variance and the average covariance of a matrix is equal to a constant, circularity also implies that the residual error variances for all pairs of treatments are homogeneous (see also equations 2.18 and 2.19).
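Circularity can be examined descriptively by computing the variance of each pairwise difference between trials, as in equation 2.21: roughly constant values are consistent with a valid uncorrected F test, while clearly unequal values signal a violation. A minimal sketch follows; the covariance matrix shown is invented for illustration.

```python
import numpy as np
from itertools import combinations

def pairwise_difference_variances(cov):
    """var(Y_j - Y_k) = sigma_j^2 + sigma_k^2 - 2*sigma_jk for every pair of trials (eq. 2.21)."""
    cov = np.asarray(cov, dtype=float)
    return {(j, k): cov[j, j] + cov[k, k] - 2 * cov[j, k]
            for j, k in combinations(range(cov.shape[0]), 2)}

# Hypothetical covariance matrix with a simplex (trend) pattern.
cov = np.array([[1.0, 0.8, 0.5],
                [0.8, 1.0, 0.8],
                [0.5, 0.8, 1.0]])
print(pairwise_difference_variances(cov))   # unequal values -> circularity is violated
```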
Since CS is a special case of circularity, it is possible for a matrix to have circularity but not CS and still result in a valid F test. However, when the assumption of circularity is violated, the power or type I error rate of the F test can be seriously affected, requiring proper adjustment of the critical F value (Collier et al., 1967; Davidson, 1972; Huynh & Feldt, 1970; Muller & Barton, 1989; Mendoza et al., 1974; Rouanet & Lepine, 1970).

Sphericity: The condition of sphericity simply represents an alternative expression of the property of circularity. Rather than presenting the variables of a matrix as is, it is helpful when using matrix algebra to transform the covariance matrix into an orthonormalized matrix, one in which the rows of the covariance matrix are converted into normalized coefficients of orthogonal contrasts (i.e. like the coefficients used in trend analysis). As Winer et al. (1991) explain, under the assumption of sphericity an orthonormalized matrix should have the property

\Sigma_y = M^* \Sigma_x M^{*\prime} = \lambda^* I    (2.23)

where M* is any orthonormal matrix of contrasts across repeated occasions with dimensions (k-1) x k, Σ_x represents the actual variance-covariance matrix, M*′ is the transpose of M*, λ* is a constant, and I is an identity matrix with ones on the diagonal (variances) and zeros on the off-diagonal elements (covariances). According to Winer et al., an orthonormalized matrix (Σ_y) having the form λ*I is said to be spherical. Since sphericity is an alternative form of the circularity condition, the ramifications for the F test when the assumption of sphericity is violated are similar to those described previously for conditions of noncircularity. For the purpose of clarity throughout this dissertation, the term sphericity will be used to indicate either assumption.

B. Multivariate Assumptions

Apart from the univariate approach, statistical tests involving RM can also be analyzed under a multivariate model (Hotelling's T² or MANOVA). That is, each trial of an RM design may be regarded as a separate dependent variable and treated as such in the analysis. Under the multivariate model, most of the assumptions described previously for the univariate technique also hold, with the exception of the assumption of sphericity, which is not required (Muller, LaVange, Ramey & Ramey, 1992). For this reason, the multivariate technique is often considered a better choice for analyzing RM designs when sphericity is not met, since it does not require adjustment of a test statistic (Davidson, 1972; Grima, 1987; Huynh & Feldt, 1970; Mendoza et al., 1974; Rouanet & Lepine, 1970; Schutz & Gessaroli, 1987). However, when the assumption of sphericity is met, the univariate model, which results in a conventional F value (MS_trials/MS_error), offers a valid and at times better approach (Davidson, 1972; Green, 1992; Mendoza et al., 1974; Muller & Barton, 1989). Thus the decision of whether to use a univariate or a multivariate model for the analysis of RM designs rests, in most cases, upon which test provides the greater power. Ideally, a researcher wishing to maximize the power of their RM design should determine power using both approaches and choose the technique offering the most power.
Realistically, however, this power comparison is rarely carried out since, anecdotally, the univariate approach is by far the more common one utilized by health and behavioral researchers, regardless of any power advantage the multivariate model may have.

C. Epsilon (ε)

When the assumption of sphericity is not met in univariate tests, Box (1954) showed that the degree to which this assumption is violated can be measured by the population parameter epsilon (ε). ε ranges from 1.00 to 1/(k-1), with a value of 1.00 representing perfect sphericity and 1/(k-1) representing the greatest level of violation possible under k occasions. Since population values of ε are rarely known, Greenhouse and Geisser (1959) suggested using the sample covariance matrix to approximate ε, and this estimate is therefore often referred to as the G-G estimate, ε̂. Huynh and Feldt (1976) showed that ε̂ was a biased estimate of ε when ε > .75, and they developed an alternative estimator (which takes into account sample size and the number of levels of the factors) referred to as the H-F estimator, ε̃.

In the event that an RM ANOVA test exhibits nonsphericity, ε or its estimates, ε̂ and ε̃, are used to correct the degrees of freedom of the critical F ratio. The adjustment to the critical value is necessary since several researchers have shown that under a true null hypothesis the type I error rate (the chance of rejecting a truly nonsignificant effect) is inflated (Davidson, 1972; Eom, 1993; Greenhouse & Geisser, 1959; Huynh & Feldt, 1976; Mendoza et al., 1974). By multiplying the degrees of freedom of the F statistic by one of these correction factors, a test is rendered more conservative (the critical F is increased), thereby decreasing the chance of making a type I error. Since ε, ε̂ and ε̃ do not all produce identical values, the degree to which a test is adjusted depends on the value being used. As Muller and Barton (1989) explain, the uncorrected F test provides the least conservative adjustment possible, followed by the ε̃-, then the ε̂- and finally the ε-adjusted tests. In situations where maximum protection against type I errors under nonsphericity is required, Greenhouse and Geisser (1959) suggested using the most conservative adjustment possible, that involving multiplication by the lower limit of ε, 1/(k-1). Under conditions where ε = 1 (sphericity), the correction factor is unnecessary.
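Box's ε can be computed from a covariance matrix by first transforming it with a set of orthonormal contrasts, in the spirit of the sphericity condition described above; the Greenhouse-Geisser estimate ε̂ is obtained by applying the same formula to the sample covariance matrix. The sketch below uses the common formulation ε = [tr(M*ΣM*')]² / [(k-1)·tr((M*ΣM*')²)]; the example matrix is invented for illustration.

```python
import numpy as np

def box_epsilon(cov):
    """Box's epsilon for a k x k covariance matrix (Greenhouse-Geisser form when cov is a sample matrix)."""
    cov = np.asarray(cov, dtype=float)
    k = cov.shape[0]
    # Build an orthonormal (k-1) x k contrast matrix: columns orthogonal to the
    # constant vector, obtained here from a QR decomposition.
    x = np.vstack([np.ones(k), np.eye(k)[:-1]]).T   # k x k, first column constant
    q, _ = np.linalg.qr(x)
    m = q[:, 1:].T                                  # drop the constant column -> (k-1) x k
    s = m @ cov @ m.T                               # orthonormally transformed covariance (cf. eq. 2.23)
    return np.trace(s) ** 2 / ((k - 1) * np.trace(s @ s))

# Hypothetical trend-patterned covariance matrix.
cov = np.array([[1.0, 0.8, 0.5],
                [0.8, 1.0, 0.8],
                [0.5, 0.8, 1.0]])
print(box_epsilon(cov))   # 1.0 indicates sphericity; values near 1/(k-1) indicate severe violation
```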
D. Power Calculations Under Different Conditions of Sphericity

When ε = 1, the uncorrected F follows an exact noncentral F distribution under a false null hypothesis, or true effect (Muller & Barton, 1989). In this case correction using epsilon is not necessary, and power can be calculated using the equations presented earlier in section I of this chapter. In the case where ε ≠ 1, the uncorrected F does not follow an exact noncentral F distribution, and therefore the usual derivations of power for an RM test may be misleading. Here the effects on power under conditions of nonsphericity are not as well defined as they are for type I error rates. Several researchers have found the power of the uncorrected F test to be overestimated as ε decreases (Marcucci, 1986; Muller & Barton, 1986), while others have shown it to be underestimated (Mendoza et al., 1974) due to an increase in the number of outlying values of F that occur. A reason for the contrasting power trends seen in these studies may be the magnitude of the effect size involved. That is, large effect sizes (e.g. large differences between treatment means) are likely to exhibit lower power estimates under nonsphericity, due to greater overlap between the existing population distributions, while small effect sizes (small differences between means) are more likely to result in larger power values than expected, because of less overlap between distributions.

Regardless of whether power is over- or underestimated, adjustment of the univariate F statistic is necessary in order to minimize inaccuracies. Muller and Barton (1989, 1991) and Muller et al. (1992) provided approximations of power for several adjusted F tests. An example of one of their methods for estimating power, using the ε̂ and ε correction factors, is as follows.

First, the epsilon-adjusted critical value of any F test is found from a central (inverse) F distribution function, namely

F_{crit}(E(\hat\varepsilon)) = FINV[\,1 - \alpha,\; df_1 \cdot E(\hat\varepsilon),\; df_2 \cdot E(\hat\varepsilon)\,]

where FINV[1 - α, df_1·E(ε̂), df_2·E(ε̂)] represents the value of the F statistic, based on epsilon-adjusted numerator (df_1) and denominator (df_2) degrees of freedom, such that Pr{F ≤ F_crit} = 1 - α, the probability that the observed F will be less than or equal to the epsilon-adjusted critical F. Here E(ε̂) represents the expected value of the estimator ε̂ (the long-range average of ε̂ over many sample estimates, which can be approximated using formulae 2.16-2.19 of Muller & Barton, 1989); according to Muller and Barton (1989), this is a better measure to use than the sample ε̂ for adjusting the degrees of freedom, since they found it to improve the accuracy of power approximations under conditions of nonsphericity.

Second, the noncentrality parameter (NCP) is calculated using the appropriate function from equations 2.00-2.04 and then multiplied by Box's ε as follows:

NCP(\varepsilon) = \lambda \cdot \varepsilon    (2.24)

Finally, power is computed from a noncentral F distribution function as

\text{Power} = 1 - FPROB[\,F_{crit}(E(\hat\varepsilon)),\; df_1 \cdot \varepsilon,\; df_2 \cdot \varepsilon,\; \lambda\varepsilon\,]

where FPROB[F_crit(E(ε̂)), df_1·ε, df_2·ε, λε] represents the noncentral F distribution function, namely Pr{F ≤ F_crit}, for an E(ε̂)-corrected noncentral F statistic based on ε-adjusted numerator and denominator degrees of freedom and the ε-adjusted noncentrality parameter.

In addition to the ε-adjusted functions above, Muller et al. (1992) provided power functions for all other corrected tests as well (see p. 1215 in their article). As they explained, the only difference among these functions is the way in which the critical value is determined.
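The three steps just described can be assembled from standard distribution routines. The sketch below is a simplified version that uses a single ε value both to shrink the degrees of freedom of the critical value and to deflate λ (equation 2.24); Muller and Barton's full method instead uses the expected value of the estimator, E(ε̂), when locating the critical value, which is not reproduced here. All inputs are hypothetical.

```python
from scipy.stats import f, ncf

def epsilon_adjusted_power(lmbda, df1, df2, eps, alpha=0.05):
    """Approximate power of an epsilon-adjusted univariate RM F test (simplified sketch).

    Step 1: critical value from a central F with epsilon-adjusted df.
    Step 2: deflate the noncentrality parameter, lambda_eps = lambda * eps (eq. 2.24).
    Step 3: power from the noncentral F with adjusted df and lambda_eps.
    """
    f_crit = f.ppf(1 - alpha, df1 * eps, df2 * eps)
    lmbda_eps = lmbda * eps
    return 1.0 - ncf.cdf(f_crit, df1 * eps, df2 * eps, lmbda_eps)

# Hypothetical example: k = 6 trials, n = 15, lambda = 8, trend matrix with eps = .56.
k, n = 6, 15
print(epsilon_adjusted_power(lmbda=8.0, df1=k - 1, df2=(n - 1) * (k - 1), eps=0.56))
```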
III. Power Determination For RM Designs

A. Review of Past Work

Over the last 60 years, both analytical and Monte Carlo simulation methods have been used to obtain power and sample size estimates for a variety of ANOVA designs. Work in this area has mostly focused on randomized group designs (Barcikowski & Holthouse, 1972; Borenstein & Cohen, 1988; Borich & Godbout, 1974; Cohen, 1969, 1988; Kraemer & Thiemann, 1987; Koele, 1982; Pearson & Hartley, 1951; Rotton & Schonemann, 1978; Tang, 1938; Tiku, 1967), while efforts at providing estimates for designs involving repeated measures have only been attempted in the last 25 years (Davidson, 1972; Grima, 1987; Marcucci, 1986; Mendoza et al., 1974; Muller & Barton, 1989; Mulvenon, 1993; Robey & Barcikowski, 1984). Part of the reason for this lag, despite the frequent use of RM designs in the behavioral and biological sciences (Edgington, 1974), is perhaps the challenge statisticians faced in deriving power formulae that could account for a correlation structure and the frequent conditions of nonsphericity associated with these designs. Despite these challenges, however, methods for providing power estimates have been successfully implemented for the one-way within-subjects and two-way mixed models.

One-Way RM Designs

The earliest efforts to approximate power for the RM ANOVA model involved those designs with a single group within-subjects variable. Davidson (1972) was among the first to provide power estimates for this design when he compared analytical approaches to power using univariate (uncorrected and conservative F) and multivariate (Hotelling's T²) methods. In his study, Davidson derived power values for designs involving a range of RM levels (3, 6 & 16), noncentrality parameters (.5 to 3.0) and sample sizes (4 to infinity) in which the covariance matrix either met or violated the assumption of compound symmetry. His findings revealed that when ε = 1 the uncorrected univariate test exhibited the greatest power, but that the multivariate test approached an almost equal level as n increased. When ε < 1, the multivariate test was found to be almost always more powerful than the ε-adjusted F test, except when effect size was large and, in some cases, when sample size was small.

One of the first attempts to computerize procedures for estimating power in RM designs was conducted by Barcikowski (1973). He developed a computer program for calculating the power of the one-way RM design through use of the multivariate (Hotelling's T²) statistic and, in later years, with the collaborative efforts of another researcher, extended his program to include power determination under the univariate model as well (Robey & Barcikowski, 1984). Their more recent program allowed input of treatment means, levels of repeated measures, several sample sizes (up to 20 at a time) and level of significance (α). It also provided an option for users to determine the power of several univariate tests (uncorrected, ε-adjusted and ε̃-adjusted F tests) under conditions of sphericity and nonsphericity. Unfortunately, these researchers did not provide detailed descriptions of the power computations involved in their methods.

Similar to Davidson's (1972) study, Marcucci (1986) compared the power and type I error rate of univariate and multivariate tests for a single RM factor design under conditions of sphericity and nonsphericity. Power estimates were derived analytically using an approximation to the distribution of the F statistic, and values for three competing tests (the uncorrected and Box's ε-adjusted F tests and Hotelling's T²) involving either 3, 4 or 5 repeated measures were provided over a range of covariance structures (ε = 1.00, .98, .90 & .72), effect sizes (zero to high) and sample sizes (10 & 20). Results derived from the approximations were similar to those seen in the Davidson (1972) study, with the conventional F test having the highest power when the assumption of sphericity was met. As ε decreased, however, the multivariate method was shown to gain a substantial power advantage over the univariate tests under most conditions.
In particular, the multivariate test was most sensitive in detecting small mean differences among highly correlated trials when the correlation between the other treatment conditions present was low. In addition to comparing values between tests, Marcucci (1986) also provided evidence supporting the accuracy of Iris power formulae. Within the same year Marcucci published Iris power formulae, Vonesh and Schork (1986) provided an analytical solution for deternuning sample size in the univariate (uncorrected) and multivariate (Hotelling's T2) analysis of single-sample repeated measurements. Sample sizes were derived and tabulated for power values of .8 and .9 and alpha levels of .01 and .05 under a range of conditions involving different effect sizes (Cohen's d = 1 to 3), minimum correlation values (0-.9) and repeated measurements (3-6). However, sample sizes were given only for the multivariate model and no attempt was made to compare power between different tests. Two Way ANOVA With OneRM Factor The two way mixed design and its associated multiple tests presented a more complex model for which to approximate power. Mendoza et al. (1974) were among the first to provide power estimates for this model. They used Monte Carlo simulation to compare power values between univariate and multivariate tests for the trial and interaction effects of a 3 (groups) by 4 (trials) design. Power and type 1 error estimates were computed for four univariate tests (conventional, e-, conservative e-, eA-corrected) and two multivariate tests (Hotelling's T2 for the trials effect and Roy's Largest Root criterion for the interaction effect) under conditions that either met or violated the assumptions of normality (normal or skewed distribution) and sphericity (e = 1, .5087 & .5365). Simulations were conducted for three different effect sizes (none, small and large) using a single sample size (9 per group) at alpha = .05. From their results, they concluded that the uncorrected F was the more powerful test under all conditions and effects when e = 1 but that the multivariate tests provided superior power over all univariate tests when e < 1. Potvin '96 29 Literature Review The only exception was for the multivariate interaction term which displayed less power when effect size was large. Skewed distributions were found to have little effect on the results. In a similar but unpublished study, Grima (1987) also exarnined power and type I error differences between univariate tests (uncorrected F and e-adjusted F) and multivariate tests (Hotelling's for trial effects and Wilk's, Hotelling-Lawley's, Pillai-Bartlett's and Roy's criterion for interaction effects) using Monte Carlo simulation on the same 3x4 design as Mendoza et al. (1974). Grima extended the work of her predecessors by generating values under varying variance-covariance structures that conformed either to CS, sphericity2 or muftisample circularity^ (eA =.98, .81 & .99, respectively). Small and moderate effect sizes were chosen (Cohen's/= .15 & .25) and a range of sample sizes selected (13-98 per group) so that power values obtained approximated fixed values of .75, .80 and .853. The conventional and corrected F tests were found to give greater power than the multivariate tests under assumptions of CS and sphericity but the difference between these tests decreased as sample size increased. 
Under some conditions of multisample sphericity, for both the trials and interaction effects, several of the multivariate tests were observed to approach or even surpass the power of the univariate tests when effect size was small or moderate.

One of the first analytical approaches to power for the two-way design was reported by Marcucci (1986) who, in addition to deriving formulae for the simple RM design, also provided a solution for the mixed model. He illustrated how power for both the trials and interaction effects of the univariate and multivariate tests could be approximated using the same formulae as for the one-way design, with only minor substitution of expressions for λ and the error terms. Results from the application of these formulae, however, were not presented or discussed as was done for the single-factor model.

² Grima (1987, pp. 52-68) used the term reducibility to describe sphericity, and multisample sphericity to describe the condition where all groups of a particular design exhibit homogeneous spherical covariance matrices.
³ In fact, the power estimates generated from this study were actually greater than the expected theoretical values of .2, .5 & .8, since the latter were based on Cohen's tables (1977) for randomized group ANOVA.

In an effort to improve the accuracy of power approximations under conditions of nonsphericity, Muller and Barton (1989, 1991) derived formulae for several corrected univariate tests of the mixed model. Their study was an extension of earlier work by Muller and Peterson (1984), who provided convenient power approximations for several multivariate tests. In the newer study, power formulae were given for the uncorrected, ê-adjusted and ẽ-adjusted F tests. These authors proposed that the critical F of a particular test should be corrected using the "expected value of the epsilon estimator" (i.e. E(ê) or E(ẽ)) rather than the sample ê or ẽ usually used in significance testing (see section II of this chapter for an example of the procedures involved). Muller and Barton (1989) tested the accuracy of their approximations by comparing their computed values (uncorrected and ê tests only) with the simulated results from the Mendoza et al. (1974) study described earlier. They also compared their analytical values with results from their own Monte Carlo simulation for the interaction effect of a 3 x 4 design involving varying covariance structures (e = 1.00, .897, .814, .757, .533 & .532), sample sizes (N = 15 & 30) and power estimates (.2, .5 & .8). They found their power approximations to be generally accurate (the largest absolute difference found was .052) but recommended using the ê-corrected test over the ẽ-adjusted one in future cases of nonsphericity, since the former provided the most power while maximizing control of Type I error.

In a follow-up of their own work, Muller et al. (1992) extended their power approximations to include Geisser-Greenhouse's conservative test as well. Using a case study involving a two-way mixed model, they also demonstrated how their multivariate and univariate power equations could be used effectively to provide important information during the planning of a study. Examples of power estimates were presented for the interaction effect of the ê-adjusted test and Wilks's LR criterion under conditions involving different covariance matrices (e = 1.0 & .9), sample sizes (N = 100, 200 and 400), numbers of repeated measures (2 & 3) and effect sizes.
As a continuation of Muller and Barton's (1989, 1991) work, Mulvenon (1993) provided four alternative methods for calculating the power of the univariate F test of the mixed model without the need to compute an expected value of epsilon (E(ê)), as required with existing formulae. Since in practice estimates of population values for the variance-covariance matrix are rarely known, Mulvenon suggested using sample values of Σ instead when computing power. His study investigated how the use of sample values affected the accuracy of four new formulae in estimating power. The equations involved included a modified version of the one derived by Muller and Barton (1989) as well as three others developed by Betz and Thompson (1990, unpublished). Results from these new equations were compared with those generated using the existing procedures recommended by Muller and Barton (1989, 1991). Through Monte Carlo simulation, mean power estimates were obtained for each formula under conditions similar to those used by Muller and Barton (1989) as well as a range of other design conditions involving different sample sizes (10, 20 & 30), e values (.35, .55, .75, .80, .85, .90 & .95), repeated measures levels (3, 5, 7 & 9) and fixed effect sizes (chosen to reflect power values of .2, .5 & .8). In most cases, the new formulae proved reasonably accurate at approximating power compared to the existing method, with the modified Muller and Barton (1989, 1991) equation appearing to be the most reliable of the four. Accuracy was found to improve with greater sample sizes and to decrease with higher numbers of repeated measures, while the degree of nonsphericity (e) had little effect on their reliability.

Methods have also been developed for estimating sample size in the two-way mixed design. Rochon (1991) extended the sample size formulae of Vonesh and Schork (1986) to include a between-subjects variable. However, unlike his predecessors, he provided analytical procedures for only the multivariate test (Hotelling's T²). He also accounted for the correlation structure of the RM variable by deriving formulae for different patterns of the correlation matrix (compound symmetry and autoregression). Tables of sample size were provided for different hypotheses (multivariate, group main effect and group by time interaction) under varying conditions of effect size (Cohen's d = .1, .3, .5, .7, .9, 1.1), correlation (0 to .9), covariance pattern and repeated measures (3, 5, 7, 9). All sample sizes presented corresponded to a power value of .8 and an alpha level of .05. Other research efforts related to sample size include works by Bloch (1986) and Lui and Cumberland (1992). These researchers also provided sample size formulae for the two-way mixed model but, unlike Rochon (1991), used a univariate technique instead. Unfortunately, their solutions applied only to the between-subjects main effect and did not include the other hypotheses (RM main effect and interaction).

ANOVA Designs With One Repeated Measures Variable and Two or More Between-Subject Variables

Although no published work dealing directly with power solutions for ANOVA designs having two or more randomized group factors and a single within-subjects variable appears in the literature, present analytical methods for the two-way mixed model can be extended to these designs as well. Winer et al.
(1991) showed how the numerator and denominator terms for the different tests of a k-way mixed model remain almost the same under the univariate approach. The only exceptions are an increase in the number of degrees-of-freedom-plus-one variables included in the effect size and noncentrality parameter formulae given in equations 2.12-2.17 (one extra variable for each new factor added) and the need to calculate power for higher-order interactions (which parallels the procedures for simple interactions). Therefore, by extending existing principles of the two-way mixed ANOVA, power determination for these designs under the univariate model is possible.

Summary of Previous Work

From the review presented, it seems evident that a variety of univariate and multivariate methods now exist for estimating power or sample size of single and mixed RM ANOVA designs. The choice of whether to use a univariate or multivariate technique seems dependent on whether certain conditions of the correlation matrix are met. Generally, when evidence of sphericity exists, the univariate approach is more powerful and therefore the better choice. However, when the assumptions of sphericity are severely violated, the multivariate model generally shows a power advantage and thus is the preferred analysis. Table 2.1 provides a summary of all relevant studies related to power and sample size estimation among the RM designs discussed. Apparent from this review and Table 2.1, and of particular relevance to this study, is the lack of methods currently available for estimating power or sample size among those designs involving multiple RM factors.

B. Computer Programs Available

Despite the availability of analytical solutions for determining power of some RM designs, the rather complex and often tedious nature of the procedures involved frequently discourages their use among investigators. Over the years, computer programs have been developed to expedite the process of power analysis and make it more appealing to researchers. Many of the programs available today are the result of experimenters' own efforts to incorporate and improve the application of their own or colleagues' approximations. Unfortunately, the majority of these programs involve crude FORTRAN-type subroutines requiring a certain amount of computer language and programming knowledge, which limits their use among less computer-literate users. In addition, accessibility to some of these programs is often difficult. Recent commercial products have emerged that offer a more accessible and user-friendly approach to power determination in RM designs. Borenstein and Cohen's (1988) power program, although not designed to provide estimates for RM ANOVA, can be modified to do so by manually calculating the appropriate effect size (f) and degrees of freedom for the RM test, and using these values in place of those for RG ANOVA.

[Table 2.1. Summary of studies related to power and sample size estimation among RM ANOVA designs.]

Programs that offer more direct methods of power estimation for one-way RM and two-way mixed designs include PASS (1991), SOLO (1992), DATASIM (1989) and Robey and Barcikowski (1984).
DATASIM, a program developed by Bradley (1989), can determine power directly using a cumulative distribution function or indirectly using Monte Carlo simulation for both the one-way RM and two-way mixed designs. The software, however, is limited since power under conditions of nonsphericity can only be estimated through simulation (and then only for the one-way design), while direct computations of power require the user to calculate λ manually. PASS and SOLO are actually identical programs developed by the same author but distributed through two separate companies (Number Cruncher Statistical System (NCSS) and BMDP, respectively). Although these programs apparently allow direct computation of power for RM designs, the inaccuracy of results obtained on test trials by the authors of this project has left doubts concerning the appropriateness of the formulae involved. In addition, neither program accommodates differences in the patterns of covariance matrices, and therefore accurate power estimates under conditions of nonsphericity are unobtainable. Although Robey and Barcikowski's (1984) program can compute power values under conditions of sphericity and nonsphericity, their program is limited to just the single-group within-subjects model as described earlier. For purposes of comparison, the characteristics of these programs have been summarized in Table 2.2.

[Table 2.2. Comparison of power programs (Datasim, Bradley '89; Statistical Power Analysis, Borenstein & Cohen '88; Solo/PASS '92; Robey & Barcikowski '84) across the designs handled (1-way RM, 2-way mixed, RM x RM), the power calculations offered, the tests power is given for, whether the covariance structure is accounted for, and Monte Carlo capability.]

Study Expectations

Based on the relationship between power and certain experimental parameters, the following power trends were expected:

> As sample size increased, power for all tests of a given design would increase due to an increase in the numerator of the F ratio, all other factors held constant.

> As effect size increased, power for all tests of a design would increase due to an increase in the numerator of the F ratio, all other factors held constant.

> As the level of significance increased from .01 to .10, power for all tests of a given design would increase due to a reduction in the critical F ratio of the central F distribution, all other factors held constant.

> As the average correlation among trials increased for those designs involving a single repeated measures factor, power for the interaction and within-subjects main effects tests would increase due to a reduction in the error term of the F ratio, all other factors held constant (see the sketch following this list). In contrast, the power for the between-subjects (group) main effects test of a two-way mixed ANOVA would actually decrease with an increase in the average correlation among trials due to a larger denominator in the F ratio, all other factors held constant.

> Under conditions involving trend correlation matrix patterns (sphericity severely violated), power values obtained would be either greater or less than those under constant correlation matrix patterns due to an increase in the variability of the F ratio.
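To make the error-term mechanism in these expectations concrete, the short sketch below computes power for the within-subjects (trials) test of a one-way RM ANOVA under compound symmetry, using the standard result that the error variance is σ²(1 − r) and the noncentrality parameter is λ = n·Σ(μ_j − μ̄)² / (σ²(1 − r)). This is an illustration only: the means, variance, sample size and Python/SciPy implementation are assumptions of this sketch, not the thesis' equations 2.01-2.04 or its FORTRAN and DATASIM programs.

import numpy as np
from scipy.stats import f, ncf

def rm_anova_power(means, sigma, r, n, alpha=0.05):
    # Power of the uncorrected trials test of a one-way RM ANOVA, assuming
    # compound symmetry: error variance = sigma^2 * (1 - r).
    means = np.asarray(means, dtype=float)
    k = len(means)
    lam = n * np.sum((means - means.mean()) ** 2) / (sigma ** 2 * (1.0 - r))
    df1, df2 = k - 1, (n - 1) * (k - 1)
    f_crit = f.ppf(1.0 - alpha, df1, df2)          # critical value under the null
    return 1.0 - ncf.cdf(f_crit, df1, df2, lam)    # P(noncentral F exceeds F_crit)

# Hypothetical example: three equally spaced trial means, unit variance, n = 15.
for r in (0.4, 0.8):
    print(r, round(rm_anova_power([0.0, 0.25, 0.5], sigma=1.0, r=r, n=15), 3))

Raising the average correlation from .4 to .8 shrinks the error variance from .6σ² to .2σ², tripling λ for fixed means, which is the mechanism behind the correlation expectation above.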
Hypotheses⁴

In addition to the expectations listed, it was hypothesized that for two-way ANOVA designs in which both factors involve repeated measures (A & B):

Main Effects

> As the average correlation among trials of factor B decreased, the power of the A main effect would increase due to a reduction in the error term of the F ratio, all other conditions held constant.

> As the number of repeated measures (levels) of factor B increased, the power of the A main effect would increase due to an increase in the F ratio, all other conditions held constant.

Interaction

> As the average correlation among trials of both factors increased, in general, the power of the interaction effect would increase due to a reduction in the error term of the F ratio, all other conditions held constant.

In addition, for those conditions in which unequal average correlations exist between the two factors:

> As the number of repeated measures for the factor with the higher average correlation increased, the power of the interaction effect would increase due to an increase in the F ratio, all other conditions held constant.

> As the number of repeated measures for the factor with the lower average correlation increased, the power of the interaction effect would decrease due to a reduction in the F ratio, all other conditions held constant.

⁴ Based on pilot project results.

Chapter Three
Methodology

A description of the methodology used in this investigation has been organized into three separate sections. The first section outlines the process by which experimental parameters and ANOVA designs were chosen for use in the power analysis of this project. The second section describes the computer programs utilized to generate power values for the different designs involved. The last section provides a brief description of the methods used to verify the reliability of these power programs.

I. RM ANOVA Designs and Experimental Conditions

A. Collecting Empirical Data

In order to generate power values reflective of the experimental parameters common to the field of human kinetics, efforts were made to collect empirical data from several disciplines within the field. Research databases common to the area (SportsDiscus, Med-line) were searched for relevant studies involving univariate RM ANOVA designs and any one of the following dependent variables: oxygen consumption (l·min⁻¹ or ml·kg⁻¹·min⁻¹), torque (N·m or ft-lb), reaction time (msec) and acquisition and retention scores. These dependent variables were chosen to encompass different disciplines in the field of human kinetics and to provide a wide variety of parameters (i.e. effect size, magnitude and pattern of correlation, RM levels) for power analysis. They were also selected out of familiarity and interest to the researcher. A total of 49 independent studies meeting these criteria were identified and letters were then sent to the investigators requesting access to their raw data (a sample copy of the forwarded letter is given in appendix 1.0). Of the 45 letters sent (4 studies were conducted by the same author), only 9 responses were received and of these, only 2 authors provided data which could be used in this study. Since this fell short of the number of data sets desired (15-20), dissertations from the School of Human Kinetics were also searched and restrictions on the type of dependent variable involved were dropped in an attempt to increase the amount obtained.
This raised the total number of usable data sets to 21⁵, which met the researcher's original objective.

B. Selecting RM Designs and Experimental Conditions

The RM ANOVA designs used in this study included the one-way RM, the two-way mixed (2 (group) x K (RM)) and the two-way within-subjects (3 (RM) x K (RM)) models. These designs were chosen due to their frequent occurrence in the field. The 3 x K design, in particular, was selected to test the specific hypotheses of this study. Empirical data collected from the studies were used to determine mean and range values of effect size (ES), sample size (n), average correlation of a given matrix (Ave r) and sample epsilon (e) for each RM design involved. These values were then used to establish the experimental parameters for which power would be generated (see appendices 2.1-2.3 for a list of both the raw and mean values of these statistics for all studies selected). With regard to effect size, values of .2, .5 and .8 were determined to be accurate representations of small, medium and large effects, respectively, among the studies chosen. These were equivalent to values proposed by Cohen (1988) for behavioral science data. Regarding Ave r, .4 was found to be reflective of a moderately low correlation, while .8 was considered representative of a moderately high correlation among the repeated trials examined. The range of epsilon values observed (.3347-1.000) provided verification of the different degrees of heterogeneity (violations of sphericity) that exist among r matrices obtained from human kinetics data. Although sample sizes of 5 to 30 and RM levels of 3, 6 and 9 were pre-selected prior to the commencement of this study, information derived from the data sets and the 49 studies examined supported the prevalence of these values in the disciplines of exercise science.

⁵ Some studies involved several dependent variables and thus provided more than one data set.

Thus, for a given design, power was estimated under a variety of experimental conditions involving three different effect sizes (.2, .5 and .8), three levels of K (3, 6 and 9), two Ave r values (.4 and .8), two r matrix patterns (constant, e = 1.000 and trend, e < .560), six sample sizes (5, 10, 15, 20, 25 and 30) and three levels of significance (.01, .05 and .10). Due to time constraints, power under trend matrix patterns was limited to the one-way model. Figure 3.01 provides a flow chart of the various conditions for which power was determined in each of the three ANOVA designs involved.

[Figure 3.01. Experimental conditions of RM ANOVA designs power was generated for in this study (ES = effect size; Ave r = average correlation among trials; n = sample size; Test = main effects and interaction). Summary of the flow chart: one-way RM ANOVA, 1 test x 648 conditions = 648; two-way mixed ANOVA, 3 tests x 324 conditions = 972; two-way RM ANOVA, 3 tests x 648 conditions = 1944; total conditions involved = 3564.]

II. Power Determination
A. Analytical

For those conditions of the one-way RM and two-way mixed designs conforming to the assumptions of sphericity (constant r matrix pattern), power was calculated directly using analytical methods. Using equations 2.01-2.04 (see chapter 2), a FORTRAN program was first developed by the researcher to compute the noncentrality parameter (λ) for each unique condition of a design (appendix 3.0 includes a copy of this program). By inputting the appropriate treatment means and variances, the FORTRAN program produced λ values for any given Ave r, n and effect size (d). When calculating λ, treatment means and variances were selected to ensure effect sizes conformed to d values of .2, .5 and .8 for all tests of a design (see appendices 4.1-4.3 for all test means and variances used in this study). In all cases except tests of interaction, definitions of d were similar to those given by Cohen (1988) and equation 2.10 of this thesis. Since no expressions of d for interaction tests could be found in the literature, equations had to be developed by the researcher of this study. These functions are given in appendix 5.0. Once λ values were obtained, they were used to compute exact power for different levels of α by inputting them into a cumulative distribution function. DATASIM, a statistical software program developed by Bradley (1989), was used for this purpose.

B. Monte Carlo Simulation

For one-way RM designs under conditions of nonsphericity (i.e. trend r matrix patterns) and for two-way RM designs, Monte Carlo (M-C) simulation was used to estimate power. Simulation programs developed by Eom (1993) were adapted for this task. These FORTRAN programs (one for each RM design) provided estimates of power through an iterative process. This entailed generation of a database of random numbers (120,000) based on specific population parameters, from which random samples of data were then repeatedly drawn and subjected to the appropriate RM ANOVA test. Power was determined by totaling the number of F tests found significant for a given α and then dividing by the overall number of tests performed for the simulation. Prior to each simulation run, the program required the user to input the treatment means, the variance-covariance structure and all sets of orthogonal contrasts of a given design. A single run, therefore, produced power values for one specific effect size and correlation matrix and for all sample sizes and levels of alpha of a particular RM design. To ensure reasonable accuracy of power estimates, the number of replicated tests the Monte Carlo programs performed per simulation was set at 3000. Although this is rather low for a Monte Carlo investigation, the vast number of conditions to be simulated in this study necessitated that this number remain at a manageable level. With replications set at 3000, the standard error of a proportion⁶ (SEP) for true power values of .50 and .99 was ±.009 and ±.002, respectively. With 95% confidence, therefore, the accuracy of power values derived from the Monte Carlo simulations of this study was expected to be about ±.02 for tests with moderate power (.50) and ±.004 for those exhibiting power at the extremes (.99 or .01).

⁶ SEP = √(p(1 − p)/n), where p = true power and n = number of replications.
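A minimal sketch of the simulation logic just described is given below for the one-way RM design. It is an illustrative re-implementation, not Eom's (1993) FORTRAN program: correlated samples are drawn from an assumed multivariate normal population, the uncorrected within-subjects F is computed for each sample, and the proportion of significant tests is taken as the power estimate. The means, covariance matrix and function names are hypothetical.

import numpy as np
from scipy.stats import f

def mc_power_oneway_rm(means, cov, n, alpha=0.05, reps=3000, seed=0):
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    k = len(means)
    df1, df2 = k - 1, (n - 1) * (k - 1)
    f_crit = f.ppf(1.0 - alpha, df1, df2)
    hits = 0
    for _ in range(reps):
        y = rng.multivariate_normal(means, cov, size=n)   # n subjects x k trials
        grand = y.mean()
        ss_trials = n * np.sum((y.mean(axis=0) - grand) ** 2)
        resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + grand
        ss_error = np.sum(resid ** 2)                     # subject-by-trial residual
        F = (ss_trials / df1) / (ss_error / df2)
        hits += F > f_crit
    return hits / reps

# Hypothetical example: k = 3 trials, unit variances, constant correlation of .8, n = 10.
cov = np.full((3, 3), 0.8) + np.diag([0.2] * 3)
print(mc_power_oneway_rm([0.0, 0.25, 0.5], cov, n=10))

With the number of replications set to 3000, repeated runs of an estimator of this kind scatter around the true power by roughly the SEP given in footnote 6.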
Table 3.1 summarizes the power methods used for each design of this study.

Table 3.1
Methods and Computer Programs Used to Determine Power For the Different Repeated Measures (RM) ANOVA Designs of This Study

ANOVA Design     Programs Used                  Method         Experimental Conditions
One-Way RM       NCP (1996) - Datasim (1989)    Analytic       Only spherical
                 Eom (1993)                     Monte Carlo*   All nonspherical
Two-Way Mixed    NCP (1996) - Datasim (1989)    Analytic       Only spherical
Two-Way RM       Eom (1993)                     Monte Carlo*   Only spherical

NCP = FORTRAN program for calculating the noncentrality parameter
* number of replications = 3000

III. Accuracy and Reliability of Power Estimates

Several measures were undertaken to verify the accuracy of the methods used in this study to approximate power. For the Monte Carlo programs, three procedures were employed. First, in order to ensure the M-C routines were computing correctly, ANOVA test results generated from these programs were compared to those computed from a well-known statistical analysis package (BMDP). This was done by extracting test data generated by the simulation program and then subjecting them to the appropriate statistical procedure(s) using BMDP. For all conditions examined, ANOVA statistics (mean square values, F ratios) were identical between the two programs.

A second method for assessing the accuracy of the M-C programs involved comparing power values from the simulation program to those given by DATASIM under identical conditions of a one-way design. A total of 162 conditions were involved, in which absolute differences in power between M-C and DATASIM were determined for designs with 3, 6 and 9 RM. The mean absolute difference (MAD) among conditions was .0071, or .71%. According to Muller and Barton (1989), absolute differences equal to or below .04 (4.0%) are sufficient for power purposes. Of the 162 differences, none were above this value and only 4 were above .025 (2.5%). Although there was a trend for the simulation program to underestimate power at the extreme ends of the power curve and to overestimate at moderate power levels (when compared to DATASIM values), these tendencies were slight. It was therefore concluded that both programs produced similar power estimates.

A third procedure for testing the reliability of the M-C programs was conducted by replicating simulation runs under identical experimental parameters. Simulations were repeated ten and four times for several conditions of the one-way and two-way RM designs, respectively, in which only the random number generator seed (i.e. the number used to initiate the data generation process) was changed. For the one-way design, 95% confidence intervals were determined for each condition involved using DATASIM values as true power, and the number of M-C estimates falling outside of these intervals was established. Of the 180 power values generated, 27 (15%) fell outside of their respective intervals, the majority (21) occurring when true power was between .25 and .64. Although this was higher than expected (with 95% confidence, only 5% should occur), the range among power values generated for a given condition rarely surpassed .03 and none of the outliers had absolute differences (from those of DATASIM) above .04. For the two-way program, confidence intervals could not be established since true power values were unknown. However, the largest range among power values from repeated simulations of a given condition was .025. Based on this and other pilot results, the M-C programs were considered reliable and capable of providing accurate estimates of true power.
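As a quick arithmetic check on the precision figures quoted in this section (a check added here for illustration, not part of the original programs), the SEP and approximate 95% half-widths implied by 3000 replications can be reproduced directly:

import math

for p in (0.50, 0.99):
    sep = math.sqrt(p * (1 - p) / 3000)
    print(f"p = {p}: SEP = {sep:.4f}, 95% half-width = {1.96 * sep:.4f}")

# Gives SEP values of about .009 and .002 and half-widths of about .018 and .004,
# consistent with the approximately +/-.02 and +/-.004 figures reported above.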
In order to validate the accuracy of power estimates generated by the DATASIM program, values computed by this program following calculation of the noncentrality parameter were compared to those given by Davidson (1972) for one-way RM designs with 3, 6 and 16 repeated trials. Of 36 power Potvin '96 48 Methodology comparisons made, only two conditions (6%) produced absolute differences greater than .04. MAD values for the designs with 3, 6 and 16 RM were .006, .026 and .003, respectively. Thus the analytical methods used to calculate power in this study were generally found to produce estimates equivalent to those reportedelse where. Delimitations of Study Power determination for this study was delimited; > to the univariate RM ANOVA approach. > to those designs and experimental conditions selected. > toeffect size conditions where population means were equally spaced apart > to equal sample sizes and number of observations for all groups and trials involved. > to correlation coefficients equal either to . 4 or .8 for AB pairs in a two-way RM design. > to a trend or simplex r pattern for all matrices not corrforrning to sphericity. > to severe violations of sphericity among trend r structures (e <.56). > by the expressions of effect size (d) used in this study. > by the accuracy of the computer and Monte Carlo simulation programs used. Potvin '96 49 Chapter Four Results Potvin '96 Results The results of this study have been organized according to the type of design involved. Data for the one-way RM ANOVA design are presented first, followed by those for the two-way design with multiple repeated measures while results for the two-way mixed ANOVA (group by trials) are given last I. One-Way Repeated Measures ANOVA A. Power Tables Power values for the one-way RM ANOVA with 3,6 and 9 repeated measures are given in Tables 4.1,4.2 and 4.3 respectively. Each table provides power for the different levels of alpha, effect size (ES), sample size (n), average correlation coefficients (Ave r) and patterns of correlation matrices (C and T) involved. Power values derived using DATASIM are under the "C" column (constant correlation matrix) while values estimated using Monte Carlo simulation are under the "T" column (trend correlation matrix). The description of power trends which follows may be clarified by referring to these tables and appropriate figures when indicated. Potvin '96 51 Results Table 4.1 PowrFor a One-Way Repeated Measures ANOVA With3 Levels Alpha = .01 ES: Small (.20) Medium (.50) Lain e(.80) Ave n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T C T n 5 01 03 02 05 03 07 07 13 06 11 23 29 10 02 04 03 06 07 10 25 29 21 23 73 66 15 02 05 05 08 12 16 47 45 39 39 95 88 20 03 05 07 10 18 22 67 61 57 53 99 96 25 03 06 10 13 24 26 82 72 73 66 100 99 30 04 07 12 16 32 32 91 82 84 77 100 100 Alpha = .05 ES: Small (.20) Medium (30) Large (.80) Ave n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T C T n 5 06 09 08 11 11 15 24 27 21 25 54 53 10 07 11 12 15 20 22 52 50 45 43 93 83 15 08 11 16 19 29 30 74 65 67 60 99 95 20 10 13 20 21 39 38 88 77 82 73 100 99 25 11 14 25 26 48 44 95 86 91 83 100 100 30 13 16 30 30 57 51 98 92 % 90 100 100 Alpha = .10 ES: Small (20) Medium (JO) Large (.80) Ave r. 
0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T c T n -5 11 16 14 17 19 22 37 36 33 35 71 63 10 13 16 20 22 31 30 67 59 60 54 97 89 15 15 17 26 26 42 39 85 74 79 71 100 97 20 17 18 31 29 53 48 94 85 90 82 100 100 25 19 21 37 34 62 54 98 91 96 91 100 100 30 21 22 42 38 70 61 99 95 98 95 100 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size T = Trend Correlation Matrix Pattern (e < .56) r = Average of Correlation Coefficients in a given Matrix All power values are in percent Potvin "96 52 Results Table 4.2 Power For a One-Way Repeated Measures ANOVA With 6 Levels Alpha = .01 ES: Small (.20) Medium (50) Large (.80) Ave n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T C T n 5 01 04 02 05 03 07 08 14 07 13 31 35 10 02 05 03 08 06 12 29 33 23 26 83 68 15 02 05 05 10 12 18 53 50 44 41 98 87 20 02 06 07 12 18 23 74 62 64 54 100 96 25 03 07 09 15 26 29 88 74 79 66 100 99 30 03 08 12 17 34 34 95 82 89 75 100 100 Alpha = .05 ES: Small (20) Medium (£0) Large (.80) Ave n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T C T n 5 06 10 08 12 11 14 25 27 21 25 60 51 10 07 11 11 16 19 22 54 48 47 42 96 81 15 08 12 15 19 29 31 78 66 69 58 100 94 20 09 13 20 22 39 ,37 91 77 85 71 100 98 25 10 14 24 26 49 44 97 85 93 80 100 99 30 12 14 29 29 58 50 99 90 97 86 100 100 Alpha = .10 ES: ? Small (20) Medium (SO) Large (.80) Aver 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T c T n 5 11 16 14 18 19 21 37 36 33 33 74 62 10 13 17 19 22 30 30 68 58 61 52 98 86 15 15 17 25 26 42 39 87 73 81 67 100 96 20 16 19 30 28 53 45 96 82 92 77 100 99 25 18 20 36 33 62 54 99 89 97 86 100 100 30 20 20 41 36 71 60 100 93 99 90 100 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size T = Trend Correlation Matrix Pattern (e < .56) r = Average of Correlation Coefficients in a given Matrix All power values are in percent Potvin "96 53 Results Table 4.3 Power For a One-Way Repeated Measures ANOVA With 9 Levels Alpha = .01 ES: Small (20) Medium (SO) Large (.80) Ave n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T C T n 5 01 04 02 06 03 08 10 18 08 14 40 40 10 02 05 03 08 07 15 36 37 28 31 92 76 15 02 06 05 12 14 20 64 54 53 45 100 93 20 02 08 07 14 22 26 84 69 74 60 100 98 25 03 08 10 17 31 33 94 79 88 70 100 100 30 03 09 14 20 40 38 98 87 95 80 100 100 Alpha = .05 ES: Small (20) Medium (50) Large (.80) Ave n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T C T C T c T n 5 06 11 08 13 11 16 28 31 24 26 68 57 10 07 12 12 16 21 26 61 52 53 46 98 87 15 08 14 16 21 32 33 84 69 77 61 100 97 20 09 15 21 25 44 41 95 81 90 74 100 100 25 11 15 26 29 55 48 99 89 97 82 100 100 30 12 16 32 32 65 54 100 93 99 89 100 100 Alpha = .10 ES: Small (20) Medium (50) Larg e(.80) Ave n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C T C T C T c T C T G T n 5 11 17 14 19 19 23 41 39 36 35 80 66 10 13 17 20 23 32 33 74 59 66 54 99 91 15 15 19 26 27 45 41 91 76 86 69 100 98 20 17 21 32 32 57 49 98 86 95 81 100 100 25 19 21 38 36 68 56 100 93 99 87 100 100 30 21 21 45 39 77 62 100 96 100 92 100 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size T = Trend Correlation Matrix Pattern (e < .56) r = Average of Correlation Coefficients in a given Matrix All power values are in percent Potvin '96 54 Results B. Power Trends Alpha (a) As expected and observed from each table, power for any given ES, n, Ave r and matrix pattern increases as the level of significance increases. 
Power is lowest when alpha is set to .01 and highest when set to .10. Sample Size (ri) From the power tables, we see that an increase in sample size, as predicted, brings about an increase in power when all other conditions for a particular design are held constant. Interestingly, when sample size is small (n = 5), power is, at best, only moderately high (.80 for K = 9 under large ES and high Ave r). This exemplifies the importance of having a sufficient n to achieve a reasonable amount of power when other factors in a study's design are less than optimal. Effect Size (ES) Also from the power tables, we see that when effect size increases for any given a, n, Ave r and matrix pattern of a particular design, power increases. The degree of influence ES has on the power of an experimental design is demonstrated by observing how maximum power for those conditions with small ES is, at best, moderately-low (.45) but increases considerably when a medium ES is involved (1.00). Average Correlation (Ave r) In agreement with the researcher's expectations, the power tables demonstrate that when the Ave r for a given design increases, power increases at any given a, n, ES and r matrix pattern,. In addition, Potvin '96 55 Results when the Ave r is moderately low (.4), a large ES and/or n is required to achieve a high degree of power (> .85). In contrast, when the Ave r is moderately high (.8) a medium ES and only moderate sample size (n = 20) is required to obtain high power. This illustrates the substantial effect correlation has on power in RM ANOVA designs. Pattern of Correlation Matrix Unlike those factors previously discussed, the pattern of coefficients within a correlation matrix (constant or trend) does not have a common effect on the power of a one-way RM ANOVA at any given n, a, ES, Ave r and RM level (K). From the power tables, it can be observed that when a design involving a constant correlation matrix pattern (C) has low power (< .20), regardless of what statistical parameters are involved, power for that same design under a trend correlation matrix pattern (T) will be slightly greater (largest differences .06-.08). In contrast, when a design under C has moderate to high power (.50-.99), that same design under T appears to have less power (largest differences .16-. 18). For example, a design with 6 RM levels, small effect size, and moderately low Ave r (.4) will have a power of .03 at n = 30 and a = .01 under C and a power of .08 under T. On the other hand, when effect size is large and all other conditions are held constant for that same design, power under C is .89 compared to .75 under T. Figure 4.01 illustrates how these power differences between constant and trend matrices change as effect size is increased. Also evident from the tables and this figure is that differences in power between tests Potvin *96 56 Results Large Effect Size (.8) Constant r Trend r Sample Size Figure 4.01. Comparisons of power between one-way repeated measures ANOVA (K = 6) with constant and trend correlation matrices under varying effect sizes. Note: All design conditions were based on ave r = .4 and alpha = .05. Numbers within graphs represent differences In power. Potvin '96 57 Results with C and T are lowest when statistical power is within the range of .20 to .40. In addition, the point within this range at which power becomes equal appears to be influenced by the magnitude of other experimental parameters. 
Generally, when a design has a small effect size and low Ave r, the range at which power for designs with C and T becomes equal is between .20 and .25. Likewise when ES is moderate and Ave r = .4, power between designs with C and T is equal at about .30 to .35 while those designs with large ES and high Ave r produce equal power at about .35 to .40. Once power surpasses .40, regardless of the conditions involved, designs demonstrating a high degree of nonsphericity (low epsilon) result in less power and do not equal the power of spherical (high epsilon) designs again until power approaches 1.00. Another interesting observation is how this magnitude of difference in power between designs with C and T changes across designs with different levels of repeated measures. As Figure 4.02 illustrates, when power of a design under C is very low (<.20), the power advantage tests with T have over those with C tends to become slightly larger as the number of repeated measures in a design increases. Therefore, a design with 9 repeated measures under T has greater power over one with 6 which in turn has greater power over one with 3 levels under identical conditions. Similarly, when a design under C has moderate to high power (.50-.90), the reduction in power for those same designs under T tends to be slightly greater for K = 9 and K = 6 than for K = 3 (Figure 4.03). This power difference between tests with C and T and different K will be discussed further in the following section. Potvin '96 58 Results 10 15 20 Sample Size ^ ~ Trend r (K=9) Trend r (K=6) « - Trend r (K=3) > Constant r (K3=K6=K9) Figure 4.02. Comparisons of power for one-way ANOVA designs with 3, 6, and 9 repeated measures (RM) under constant and trend correlation matrices and small effect size (.2). Note: All design conditions were based on ave r = .8 and alpha = .01. Potvin '96 59 Results K = 3 5 10 15 20 25 30 Sample Size K = 9 4® " Trend r • Constant r 10 15 20 Sample Size Figure 4.03. Comparisons of power for one-way ANOVA designs with 3, 6, and 9 repeated measures (RM) under constant and trend correlation matrices and large effect size (.8). Note: All design conditions were based on ave r = .4 and alpha = .10. Numbers within graphs represent differences In power. Potvin '96 60 Results Number of Repeated Measures (K.) When comparing power values between one-way ANOVA tests with different levels of repeated measures (that is, making comparisons between tables 4.1 (K = 3), 4.2 (K = 6) and 4.3 (K = 9)), it becomes apparent that the effects of an increase in the levels of repeated measures (K) on power are dependent on the effect size and pattern of the correlation matrix involved. In general, for tests with C, an increase in K results in very little difference in power between designs when effect size is small and other factors are held constant. However at medium and large effect sizes (.5 and .8 respectively), those designs with a greater number of RM exert a power advantage over those with fewer RM levels across most n, a and Ave r with maximum differences as high as .14-. 19. Referring to the top graph in Figure 4.04, it is evident that a one-way design with 9 repeated measures has greater power over a design with K = 6 or K = 3 levels when effect size is large. Likewise a test with 6 levels shows superior power over one with 3 at the same effect size. 
However, as effect size decreases we see the power advantage gained from having a greater number of RM levels declines to the point where little apparent difference exists between the three tests at small effect size. Interestingly, an almost opposite effect seems to occur under T. Looking at power values under a trend r matrix or referring to the middle graph in Figure 4.04, we see that at a small effect size, slightly greater power exists for tests with higher levels of K when a = .01 but this power advantage is reduced as a, n and/or Ave r increases (the latter two factors are not shown in the graph). In addition, as effect size becomes large, we see that a test with K = 3 levels demonstrates greater power than one with K = 6 and as much if not greater power than one with 9 RM at both levels of alpha (middle and bottom graphs of Figure 4.04). Thus when power is generally high among designs (large ES), the tendency for power to increase with a greater number of repeated measures, as was observed under C appears to be reduced or Potvin *96 61 Results Constant r, alpha = .01 Trend r, alpha = .01 3 6 9 K Trend r, alpha = .10 3 6 g K Figure 4.04. Power for one-way ANOVA designs with 3, 6, and 9 repeated measures under varying correlation matrices, effect sizes and alpha. Note: All designs based on ave r = .4 and n - 30. r = correlation Potvin '96 62 Results lost entirely when the assumptions of sphericity are severely violated (e < .56). Likewise, when power is generally low (small ES), the rather constant values observed among designs with varying K under C, seems to give way to a power advantage in favor of designs with a greater number of repeated measures under nonsphericity. n. Two-Way Repeated Measures ANOVA A. Power Tables Power values for 3 x 3 , 3 x 6 and 3x9 ANOVA designs with two repeated measures factors are given in tables 4.4a-c, 4.5a-c and 4.6a-c, respectively. Within each table, power is provided for the different levels of alpha, effect size, sample size and average correlation coefficients of RM factors involved. Each column within a given ES and a represent power values for one of the four correlation (AB) matrices examined under the two-way RM design (Ave r = .4 for A and B; Ave r = .4 for A, Ave r = .8 for B; Ave r = .8 for A, Ave r = .4 for B; Ave r = .8 for A and B). For comparison, the overall average r of each matrix is also given. Power values were generated under constant r matrix patterns (assumptions of sphericity met) using Monte Carlo procedures. Means, standard deviations and complete matrices are given in appendices 4.1-4.3 and 6.1-6.2, respectively. The description of power trends which follows may be clarified by referring to these tables and appropriate figures when indicated. Potvin '96 63 Results Table 4.4a Power of the A Main Effect For a 3(A) x 3(B) ANOVA With Repeated Measures on Two Factors. 
Test: ES: r for A: r forB: Overall r: 0.40 Main Effect of Factor A (3 levels) Alpha = .01 Small (.20) OA 0.4 0.8 0.50 0.8 0.4 0.50 0.8 0.80 Medium (SO) 0.4 0.4 I 0.8 0.40 0.50 0.8 0.4 I 0.8 0.50 0.80 Large (.80) 0.4 0.4 | 0.8 0.40 0.50 0.8 0.4 I 0.8 0.50 n 5 10 15 20 25 30 02 04 05 08 11 13 01 02 03 04 04 05 04 10 19 30 41 51 03 10 19 30 41 50 08 25 48 66 81 89 03 09 16 25 36 45 27 83 98 100 100 100 28 81 98 100 100 100 23 74 95 99 100 100 08 30 54 73 85 93 75 100 100 100 100 100 Alpha = .05 ES: Small (20) Medium (50) Large (.80) r for A: 04 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OA 0.4 0.8 0.4 OX 0.4 0.8 0.4 0.8 OA 0.8 Overall r: 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 n 5 08 06 15 12 23 12 61 61 53 25 95 96 10 12 08 27 28 53 26 95 96 92 58 100 100 15 16 10 42 42 73 38 100 100 99 79 100 100 20 21 12 56 55 87 50 100 100 100 90 100 100 25 27 13 67 65 94 61 100 100 100 95 100 100 30 31 16 75 75 97 69 100 100 100 99 100 100 Alpha = .10 ES: Small (.20) Medium (50) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: OA 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 n 5 14 11 25 23 37 22 76 77 69 39 99 99 10 20 15 40 41 67 37 98 98 96 71 100 100 15 26 17 56 56 83 52 100 100 100 88 100 100 20 33 20 68 67 93 64 100 100 100 95 100 100 25 38 22 78 76 97 73 100 100 100 98 100 100 30 44 25 85 84 99 80 100 100 100 99 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. All power values are in per Potvin '96 64 Results Table 4.4b Power of the B Main Effect For a 3(A) x 3(B) ANOVA With Repeated Measures on Two Factors. Test: Main Effect of Factor B (3 levels) Alpha = .01 ES: Small (.20) Medium (50) Large (-80) r for A: 0.4 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 n 5 02 05 01 05 09 33 04 33 30 81 11 81 10 05 15 03 15 34 88 14 88 82 too 38 100 15 08 27 05 27 61 99 25 99 98 100 64 100 20 11 39 06 39 77 100 35 100 100 100 81 100 25 15 51 07 . 50 88 100 47 100 100 too 91 100 30 19 61 08 61 94 100 56 100 100 100 96 100 Alpha = .05 ES: r for A: r forB: Overall r: 0.40 Small (.20) n 5 10 15 20 25 30 0.4 09 15 21 26 32 37 OA 0.8 0.50 17 33 48 62 71 79 0.8 0.4 0.50 06 10 13 16 18 20 0.8 0.80 16 32 48 61 72 79 Medium (50) 0.4 0.4 I 0.8 0.40 26 59 80 91 96 0.50 63 97 100 100 100 100 0.8 0.4 I 0.8 0.50 14 32 44 57 67 76 0.80 65 96 100 100 100 100 Large (.80) 0.4 0.4 I 0.8 0.40 59 94 100 100 100 100 0.50 97 100 100 100 100 100 0.8 0.4 I 0.8 0.50 29 62 83 92 97 99 0.80 97 100 100 100 100 100 Alpha = .10 ES: Small (20) Medium (50) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OA 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 n 5 15 25 12 25 39 78 24 78 72 99 44 99 10 23 44 17 45 70 99 42 99 97 100 73 100 15 31 59 20 59 87 100 57 100 100 100 90 100 20 36 71 24 71 94 100 67 100 100 100 % 100 25 42 79 25 80 98 100 76 100 100 100 99 100 30 47 86 28 87 99 100 83 100 100 100 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. All power values are in percent. 
Potvin "96 65 Results Table 4.4c Power of the AB Interaction For a 3(A) x 3(B) ANOVA With Repeated Measures on Two Factors. Test: A by B Interaction Alpha = .01 ES: Small (.20) Medium (JO) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: OA 0.8 OA 0-8 OA 0.8 OA 0.8 OA 0.8 OA 0.8 Overall r: 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 n 5 01 02 02 01 02 03 03 03 02 09 08 08 10 01 02 02 02 03 08 09 08 06 29 31 28 15 01 02 02 02 04 14 15 15 11 52 54 53 20 01 02 03 03 06 23 23 24 17 72 73 72 25 02 03 03 03 07 33 32 34 24 86 86 85 30 02 04 04 04 10 42 42 43 33 93 94 93 Alpha = .05 ES: r for A: r forB: Overall n n 5 10 15 20 25 30 Small (20) OA OA I 0.8 0.40 05 06 06 07 07 08 0.50 06 07 08 09 11 14 0.8 OA I 0.8 0.50 07 07 09 11 12 13 0.80 06 08 09 10 12 13 Medium (50) 0.4 04 I 0.8 0.40 07 10 14 17 21 24 0.50 12 22 35 46 57 67 0.8 OA I 0.8 0.50 12 24 35 46 56 66 0.80 12 23 36 47 58 67 Large (.80) 0.4 OA I 0.8 0.40 10 19 29 38 48 55 0.50 25 54 76 89 96 98 0.8 0.4 | 0.8 0.50 24 55 77 89 96 99 Alpha = .10 ES: Small (20) Medium (50) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: OA OS OA 0.8 OA 0.8 OA 0.8 OA 0.8 OA 0.8 Overall r: 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 0.40 0.50 0.50 0.80 n 5 10 12 12 12 13 22 19 21 18 39 36 38 10 12 14 14 13 18 34 35 35 31 68 69 67 15 12 14 16 15 22 48 48 48 41 85 86 85 20 14 17 17 18 27 60 59 60 52 94 94 94 25 13 18 19 21 32 69 69 70 61 98 98 98 30 14 23 22 22 36 78 77 77 68 99 99 99 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. All power values are in percent Potv in^ 66 Results Table 4.5a Power of the A Main Effect For a 3(A) x 6(B) ANOVA With Repeated Measures on Two Factors. Test: Main Effect of Factor A (3 levels) Alpha = .01 ES: Small (.20) Medium (50) Larg r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 n 5 02 01 07 07 17 03 61 63 53 09 98 98 10 07 02 26 25 59 09 99 99 98 33 100 100 15 11 03 47 46 85 18 100 100 100 58 100 100 20 18 04 65 65 96 28 100 100 100 76 100 100 25 25 04 79 79 99 39 100 100 100 89 100 100 30 32 07 88 88 100 49 100 100 100 95 100 100 Alpha = .05 ES: r for A: r forB: Overall r: 0.40 Small (20) n 5 10 15 20 25 30 0.4 11 20 30 38 48 56 OA 05 0.52 06 08 11 12 14 18 0.8 0.4 0.45 23 52 72 86 93 96 0.8 0.80 23 52 73 86 93 97 Medium (50) 0.4 0.4 I 0.8 0.40 43 83 96 99 100 100 0.52 12 27 40 53 63 72 0.8 0.4 I 0.8 0.45 89 100 100 100 100 100 0.80 91 100 100 100 100 100 Large (.80) 0.4 0.4 I 0.8 0.40 84 100 100 100 100 100 0.52 28 60 81 93 97 99 OX OA | 0.8 0.45 100 100 100 100 100 100 Alpha = .10 ES: Small (20) Medium (50) Larg e(.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OX 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 n 5 20 12 37 36 59 22 96 97 92 43 100 100 10 31 14 66 66 91 40 100 100 100 74 100 100 15 41 18 83 84 98 53 100 100 100 89 100 100 20 51 20 92 93 100 67 100 100 100 97 100 100 25 61 24 96 96 100 75 100 100 100 99 100 100 30 69 27 98 99 100 83 100 100 100 100 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. 
All power values are in percent Potvin '96 67 Results Table 4.5b Power of the B Main Effect For a 3(A) x 6(B) ANOVA With Repeated Measures on Two Factors. Test: Main Effect of Factor B ( 6 levels) Alpha = .01 ES: Small (.20) Medium (SO) Large (-80) r for A: 0 A 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OS 0.4 0.8 0.4 OS 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 n 1 5 02 05 01 04 10 43 05 45 37 94 12 92 10 05 13 02 13 36 92 11 93 86 100 39 100 15 06 23 03 24 60 100 20 100 98 100 66 100 20 10 37 04 37 79 100 31 100 100 100 84 100 25 11 50 04 50 90 100 42 100 100 100 93 100 30 16 61 05 61 96 100 54 100 100 100 98 100 Alpha = •05 ES: Small (20) Medium (SO) Large (.80) r for A: 0 A 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OS 0.4 0.8 0.4 OS 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 0.40 0.52 0.45- ' 0.80 n 5 08 14 06 15 28 70 14 72 63 99 30 99 10 13 30 09 30 58 98 27 98 96 100 63 100 15 18 45 12 46 81 100 40 100 100 100 85 100 20 23 61 12 59 92 100 53 100 100 100 94 100 25 29 72 14 72 97 100 65 100 100 100 98 100 30 34 81 16 80 99 100 75 100 100 100 99 100 Alpha = .10 ES: Small (20) Medium (50) Large (.80) r for A: 0 A 0 S 0.4 0.8 0.4 0.8 r forB: 0.4 OS 0.4 OS 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.45 0:80 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 n 5 15 23 12 25 40 82 23 81 76 100 43 100 10 21 42 16 42 70 99 40 99 98 100 74 100 15 27 59 19 58 88 100 53 100 100 100 91 100 20 34 73 21 70 96 100 65 100 100 100 97 100 25 40 82 23 80 99 100 76 100 100 100 99 100 30 46 88 25 88 99 100 84 100 100 100 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. All power values are in percent Potvin '96 68 Results Table 4.5c Power of the AB Interaction For a 3(A) x 6(B) ANOVA With Repeated Measures on Two Factors. Test: ES: r for A: r forB: Overall r: n 5 10 15 20 25 30 A by B Interaction Small (.20) Alpha = .01 04 OA I 0.8 0.40 01 01 01 01 01 01 0.S2 01 01 02 03 03 04 0.8 OA I 0.8 0.4S 01 01 02 03 04 04 0.80 01 01 02 03 03 03 Medium (50) 0.4 OA | 0.8 0.40 02 02 04 05 07 09 0.52 03 07 14 23 33 42 0.8 OA I 0.8 0.45 03 07 15 23 32 42 0.80 03 07 14 24 32 42 Large (.80) 0.4 OA | 0.8 0.40 02 06 11 18 25 34 0.52 08 30 56 77 89 96 0.8 OA I 0.8 0.45 09 29 56 76 90 96 0.80 08 30 56 77 91 96 Alpha = .05 ES: r for A: r forB: Overall r: n 5 10 15 20 25 30 Small (.20) OA OA I 0.8 0.40 06 06 07 06 07 08 0.52 06 06 08 10 11 12 0.8 OA I 0.8 0.45 05 07 09 10 11 13 0.80 06 06 08 10 10 12 Medium (50) 0.4 OA I 0.8 0.40 07 09 13 15 19 23 0.52 10 20 34 45 55 66 0.8 OA I 0.8 0.45 12 20 33 46 56 68 0.80 11 21 33 46 55 67 Large (.80) 0.4 OA I 0.8 0.40 10 18 28 39 48 59 0.52 25 55 79 92 96 99 0.8 OA I 0.8 0.45 25 54 78 91 97 99 0.80 23 53 78 91 97 99 Alpha = .10 ES: Small (.20) Medium (50) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: OA 0.8 OA 0.8 04 0.8 04 0.8 04 0.8 04 0.8 Overall r: 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 0.40 0.52 0.45 0.80 n 5 11 12 11 i i : 13 19 19 19 17 37 35 36 10 11 12 13 12 15 32 32 33 28 68 68 67 15 13 15 16 14 21 47 46 46 40 87 87 87 20 12 16 17 17 24 58 58 60 52 96 95 95 25 13 18 20 18 29 68 69 68 61 98 98 99 30 14 20 22 20 35 77 78 77 71 99 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. 
All power values are in percent Potvin '96 69 Results Table 4.6a Power of the A Main Effect For a 3(A) x 9(B) ANOVA With Repeated Measures on Two Factors. Test: Main Effect of Factor A (3 levels) Alpha = .01 ES: Small (.20) Medium (JO) Large (.80) r for A: 0.4 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 n 5 03 01 12 12 28 04 83 82 76 10 100 100 10 10 02 42 43 80 10 100 100 100 34 100 100 15 19 03 70 69 98 18 100 100 100 60 100 100 20 29 04 86 88 100 28 100 100 100 78 100 100 25 39 05 94 95 100 39 100 100 100 89 100 100 30 50 05 98 99 100 50 100 100 100 95 100 100 Alpha = .05 ES: Small (20) Medium (50) Large (.80) r for A: 0 A 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OX 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 n 5 13 06 33 34 61 14 98 98 96 28 100 100 10 28 08 69 70 96 28 100 100 100 62 100 100 15 41 10 89 88 100 41 100 100 100 83 100 100 20 54 12 96 97 100 53 100 100 100 94 100 100 25 64 15 99 99 100 66 100 100 100 97 100 100 30 74 16 100 100 100 74 100 100 100 99 100 100 Alpha = .10 ES: Small (.20) Medium (50) Large (.80) r for A: 0 A 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OS 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 n 5 22 12 49 49 76 24 99 100 99 42 100 100 10 41 14 81 81 98 40 100 100 100 75 100 100 15 55 18 94 94 100 55 100 100 100 90 100 100 20 67 21 98 99 100 67 100 100 100 97 100 100 25 76 24 100 100 100 77 100 100 100 99 100 100 30 84 27 100 100 100 84 100 100 100 100 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. All power values are in percent Potvin '96 70 Results Table 4.6b Power of the B Main Effect For a 3(A) x 9(B) ANOVA With Repeated Measures on Two Factors. 
Test: Main Effect of Factor B (9 levels) Alpha .01 ES: Small (20) Medium (SO) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OX 0.4 0.8 0.4 OX OA 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 n 5 02 04 01 05 12 54 04 54 46 98 14 98 10 04 14 02 15 41 97 12 97 93 100 44 100 15 07 27 03 27 68 100 23 100 100 100 74 100 20 10 42 03 41 86 100 35 100 100 100 90 100 25 12 55 04 56 95 100 50 100 100 100 97 100 30 16 68 05 68 98 100 62 100 100 100 99 100 Alpha = .05 ES: Small (20) Medium (SO) Large (.80) r for A: 0 A 0 X 0.4 0.8 0.4 0.8 r forB: 0.4 0.8 0.4 OX 0.4 OX 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 n 5 09 15 07 16 29 78 14 77 72 100 33 100 10 14 33 08 32 65 99 27 99 98 100 68 100 15 20 49 10 49 86 100 45 100 100 100 89 100 20 24 64 11 66 96 100 58 100 100 100 97 100 25 29 77 13 77 99 100 72 100 100 100 99 100 30 35 85 15 86 100 100 81 100 100 100 100 100 Alpha = .10 ES: Small (20) Medium (SO) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: 0.4 OX 0.4 0.8 0.4 OX 0.4 0.8 0.4 0.8 0.4 0.8 Overall r: 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 n 5 16 25 12 25 43 87 24 87 82 100 46 100 10 22 45 14 44 77 100 40 100 99 100 79 100 15 30 62 17 62 92 100 57 100 100 100 93 100 20 35 75 19 76 98 100 70 100 100 100 99 100 25 41 85 21 85 99 100 80 100 100 100 100 100 30 48 91 24 92 100 100 88 100 100 100 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group i coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. All power values are in percent Potvin '96 71 Results Table 4.6c Power of the AB Interaction For a 3(A) x 9(B) ANOVA With Repeated Measures on Two Factors. Test: ES: r for A: r forB: Overall r: A by B Interaction Small (.20) Alpha = .01 OA OA I 0.8 0.40 0.52 0.8 OA I 0.8 0.43 0.80 Medium (50) OA OA I 0.8 0.40 0.52 0.8 OA I 0.8 0.43 0.80 Large (.80) 0.4 OA I 0.8 0.40 0.52 0.8 OA I 0.8 0.43 0.80 n 5 10 15 20 25 30 01 02 01 02 01 01 02 01 02 03 03 04 01 02 02 02 03 04 01 01 02 03 03 04 02 03 04 05 07 10 04 09 17 27 40 50 03 06 17 26 38 50 03 09 16 26 40 51 02 06 14 20 29 39 11 35 65 85 95 99 10 37 66 86 94 98 10 35 64 85 95 98 Alpha = .05 ES: r for A: r forB: Overall T: n 5 10 15 20 25 30 Small (.20) OA OA I 0.8 0.40 06 06 06 07 06 07 0.52 06 06 08 10 11 13 0.8 OA I 0.8 0.43 05 07 09 09 10 12 0.80 06 07 08 10 12 13 Medium (50) 0.4 OA I 0.8 0.40 08 11 14 16 21 25 0.52 12 23 37 52 63 75 0.8 OA I 0.8 0.43 12 24 37 51 62 74 0.80 13 24 36 51 64 74 Large (.80) OA OA I 0.8 0.40 10 19 30 41 53 63 0.52 28 61 84 95 99 100 0.8 OA I 0.8 0.43 27 61 86 96 99 100 0.80 27 61 84 95 99 100 Alpha = .10 ES: Small (.20) Medium (50) Large (.80) r for A: OA 0.8 0.4 0.8 0.4 0.8 r forB: OA 0.8 OA 0.8 OA 0.8 OA 0.8 OA 0.8 OA 0.8 Overall r: 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 0.40 0.52 0.43 0.80 n 5 10 11 11 11 14 20 20 22 18 40 39 39 10 11 13 13 14 19 35 35 36 29 73 72 74 15 11 15 15 15 22 51 50 50 42 91 92 91 20 13 18 16 17 26 64 64 64 55 98 98 98 25 13 19 18 20 31 74 74 76 65 100 99 99 30 13 21 20 23 36 84 83 84 76 100 100 100 r = Average of Correlation Coefficients in a given Matrix ES = Effect Size (d) n = Sample Size per group r coefficients for AB pairs of a given matrix are equal to the lowest r among factors A and B. All power values are in percent. Potvin '96 72 Results B. Power Trends Alpha. 
Sample Size and Effect Size

Similar to the one-way RM ANOVA and as predicted, power was found to increase as α, n or ES increased for any given experimental condition and test of the two-way RM design. The only exceptions were when power either approached the level of significance or a value of 1.00, in which case values were approximately equal across the levels of a condition.

Average Correlation Among Factors (Different Correlation Matrices)

Main Effect of A

Beginning with the main effects test of factor A, tables 4.4a, 4.5a and 4.6a reveal that when the Ave r of factor A increases from .4 to .8 and all other conditions are held constant (that is, compare columns 1 with 3 and columns 2 with 4 within a given ES), power increases in all three designs. In addition, the degree of increase in power as Ave r goes from .4 to .8 is generally greater as KB, the number of RM of factor B, increases (i.e. the power increase is greatest for the 3 x 9 design and least for the 3 x 3). Of further interest is the power trend observed among the four different correlation structures. Here we see that for almost any given α, n, ES and KB, power is lowest for a test having a matrix with an average correlation among the trials of factors A and B equal to .4 and .8, respectively (abbreviated 4-8 matrix), and is greatest (equally) for a test whose matrices involve an Ave r for A of .8 and an Ave r for B of either .4 or .8 (abbreviated 8-4 and 8-8, respectively), while a test having a matrix with Ave r equal to .4 for both A and B (abbreviated 4-4 matrix) displays a magnitude of power in between. The only exception to this trend occurs at the upper end of the power curve, where the difference in power between tests with different matrices diminishes as power approaches 1.00. The top graph in Figure 4.05 depicts this common power trend across matrices of the A main effects test for a 3 x 6 design with small ES and α = .05. Both the tables and this figure indicate that when the average correlation among A trials (Ave rA) is high (.8) and the average correlation among B trials (Ave rB) is either equal to (.8) or lower than (.4) Ave rA, power will be greater than when Ave rA is lower (.4), and especially greater than when Ave rB (.8) surpasses Ave rA. Thus it seems power for the A main effects test under different correlation matrices in a two-way RM design is dependent on the average correlation of A and independent of the average correlation of B, at least until Ave rB becomes larger than Ave rA, at which point power is negatively affected.

Figure 4.05. A comparison of power across different correlation matrices for tests of a 3 x 6 RM ANOVA design (panels: Main Effect of A, Main Effect of B, AB Interaction; power plotted against sample size, n). Note: Design based on small ES and α = .05.

Main Effect of B

Referring now to the power values for the main effects test of factor B shown in tables 4.4b, 4.5b and 4.6b, we see similar power trends across the different correlation matrices as those observed for the A main effects test. Examining the Ave r of factor B first, we see that as Ave rB increases from .4 to .8 and all other experimental conditions are held constant, power for the B test increases (compare columns 1 with 2 and 3 with 4 for any given ES and α). Secondly, those matrices having an Ave rB = .8 and Ave rA = .4 or .8 (i.e.
8-8 and 4-8) produce a test with the most power, followed by a matrix with average correlations equal to .4 for A and B (4-4), while a matrix in which Ave rB is less than Ave rA (8-4) displays the least power of the four. Again, as in the A test, we see that if the Ave r of the factor being averaged over is above the Ave r of the main effects factor (B in this case), power of the test drops considerably. Likewise, if the Ave r of the pooled factor is less than or equal to the Ave r of the main effects factor (8-8 or 4-8), power will be highest. The middle graph of Figure 4.05 illustrates this general power trend among the different matrices of the B main effects test.

Interaction

Different power trends emerge for tests of interaction (AB) compared to those described previously for main effects tests. Referring to tables 4.4c, 4.5c and 4.6c, we see that an increase in the overall Ave r of the four matrices does not necessarily produce a concomitant increase in power for the AB test. Rather, those matrices in which at least one factor (A or B or both) has an Ave r = .8 result in the highest power for the AB test. As illustrated in the bottom graph of Figure 4.05, three of the four matrices (8-8, 8-4 and 4-8) produce relatively equal power for the AB test, while the 4-4 matrix is the only one of the four which exhibits inferior power values across most n. These results suggest power for the interaction test in the two-way RM model remains the same among correlation structures with different overall Ave r so long as all matrices involved have at least one RM factor with an Ave r among its trials equal in magnitude to the highest overall Ave r observed among the AB matrices. It appears, therefore, that the mean magnitude of the correlation coefficients among pooled trials of factor A or B is a more influential variable affecting the power of an interaction test than the overall Ave r of the AB matrix.

Number of Repeated Measures of Factor B (KB) - Differences Between Designs

As in the one-way design under conditions of sphericity, a general increase in power across designs with greater KB was also observed among the tests of the two-way RM model. However, the extent of this trend was not the same for all tests.

Main Effect of A

Of the three tests involved, the tendency for power to increase as KB increases was most noticeable for the A main effects test. Contrasts in power between KB = 9 and KB = 3 designs of this test were generally high, with differences as large as .53-.58. Figure 4.06 illustrates this common pattern for the A test across different effect sizes and correlation matrices when n = 10 and α = .05. One exception to the rule seems to be a test having a 4-8 matrix, as shown in the top graph, where at small ES power seems to be about equal across designs with different KB. However, under medium and large effect sizes (middle and bottom graphs) even this test begins to demonstrate slightly greater power as KB increases. Another exception is when power for the A test under the other three matrices approaches 1.00 (medium and large effect sizes), in which case differences between designs are reduced and eventually negated.

Main Effect of B

A general tendency for power to increase with larger KB also exists for the B main effect.
However, unlike the A test, the increase is generally small (largest differences ranging between .16 and .21) and only observable when KB = 9 and the designs involved have a moderate to high degree of power (above ~.40). Figure 4.07 demonstrates these differences in power between designs with distinct KB under the same conditions described for the A test. As can be seen in the top graph, power values between designs are about equal for the four matrices under a small ES. At medium ES (middle graph), a design with 9 RM begins to show a slight power advantage over the other designs under most conditions, except when the power of a test remains low (the 8-4 matrix), in which case a reverse effect occurs. However, under a large ES (bottom graph), even this test demonstrates a tendency towards greater power as KB increases. Once power approaches extremely high values (1.00), all designs regardless of KB have about equal power. The observed power trend associated with the B main effects test is similar to the general pattern observed in the one-way RM model under a constant r matrix.

Figure 4.06. Change in power for the "A" test of a two-way RM ANOVA as the number of levels of factor "B" increases under varying effect sizes and correlation matrices (panels: A Test; Small, Medium and Large Effect Size). Note: Based on α = .05 and n = 10.

Figure 4.07. (Panels: B Test; Small, Medium and Large Effect Size.)

AB Interaction

For the interaction test, the increase in power accompanying larger KB is mainly evident, like the B test, only when KB = 9. However, unlike the B test, a particular design's power does not need to be as high in order for this pattern to become noticeable (only above ~.20-.30). Figure 4.08 illustrates how, when effect size is small (low power), designs with 3, 6 or 9 RM produce similar power values, but as ES becomes larger (power increased above ~.20), a design with 9 RM shows a slight power advantage over those with fewer RM under most correlation matrices. Only when a test involves a 4-4 matrix does this pattern fail to emerge, since power among all three designs still remains fairly low. Under more favorable experimental conditions (greater n, α), however, even this test demonstrates a similar power trend as the others (not shown in figure). Of the three tests in the two-way RM ANOVA, the AB test showed the smallest contrast in power as KB increased, with differences between KB = 9 and KB = 3 reaching a maximum of only .08 to .13.

Between Test Comparisons: Main Effects and Interaction

When comparing power values between different tests of a two-way RM ANOVA, some distinctive patterns emerge under each of the correlation matrices and designs involved. Figure 4.09 provides a comparison of power among the main effects and interaction tests of all three designs under small ES, α = .05 and n = 30. Referring to the top graph, we see that for a 3 x 3 design involving a 4-4 matrix, power for the B test is slightly greater (.37) than that of the A test (.31), while both exhibit greater power than the interaction test (.08). The same design under an 8-8 matrix reveals a similar power order among the three tests, except that the differences in power between the main effects tests and the AB test are considerably larger. For the heterogeneous matrices (4-8 and 8-4), the power order appears to be dependent on the magnitude of the Ave r of the main effects factor.
Figure 4.08. Change in power for the "AB" test of a two-way RM ANOVA as the number of levels of factor "B" increases under varying effect sizes and correlation matrices (panels: AB Test; Small, Medium and Large Effect Size). Note: Based on α = .05 and n = 10.

As illustrated by the bar graphs, when the Ave r of B is greater than that of A (4-8 matrix), the B test shows superior power, whereas when the Ave r of A is greater (8-4 matrix), the A test has the greater power. In addition, the degree to which power is greater for the B test over the A test under a 4-8 matrix (.79 - .16 = .63) is slightly larger than that observed when the A test dominates under an 8-4 matrix (.75 - .20 = .55). Although these comparisons are specific to conditions involving a small ES, a level of significance of .05 and a sample size of 30, examination of most corresponding power values between tables 4.4a, 4.4b and 4.4c reveals a similar power order between tests across all four matrices. Exceptions are those conditions in which power approaches 1.00. For a 3 x 6 design, the order of greatest to least power among tests changes (center graph). Under the 4-4 and 8-8 matrices, we see that the A test, in contrast to the results observed for the 3 x 3 ANOVA, gains a substantial power advantage over the B test. Further, the difference in power between tests A and B under the heterogeneous matrices seems to favor the A test. Although the B test still shows greater power than the A test when the Ave r of factor B = .8, the difference between the two tests (.81 - .18 = .63) is less than that observed when A dominates under an 8-4 matrix (.96 - .16 = .80). As in the 3 x 3 design, the interaction test again displays the least power across all four matrices. For a 3 x 9 design, the power order between tests is similar to that observed in the 3 x 6 model, with the exception that the gains in power for the A test over the other tests are further enhanced under all r matrices except 4-8 (see bottom graph).

Figure 4.09. A comparison of power between A, B and AB tests of the two-way RM ANOVA under different levels of factor B (3, 6 and 9) and correlation matrices. Note: Designs based on small ES, n = 30 and α = .05.

III. Two-Way Mixed ANOVA

A. Power Tables

Power values for the 2 x K mixed ANOVA with 3, 6 and 9 repeated measures on the second factor are given in tables 4.7a-c, 4.8a-c and 4.9a-c, respectively. Each table provides power for the different levels of alpha, effect size, sample size and average correlation coefficients involved. All values were derived by first calculating the corresponding noncentrality parameter (λ) of each experimental condition and then converting λ to power using a cumulative distribution function given in DATASIM (a small sketch of this conversion appears below). Due to time constraints, power under conditions of nonsphericity (trend) was not determined.
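As a rough illustration of this last step only (not the DATASIM routine itself), the sketch below converts a noncentrality parameter into power through the noncentral F distribution; the degrees of freedom and the λ value shown are hypothetical placeholders.

```python
# Hedged sketch: converting a noncentrality parameter (lambda) into power via
# the noncentral F distribution. This mirrors the general procedure described
# above, not the DATASIM implementation; the df and lambda are placeholders.
from scipy.stats import f, ncf

def power_from_lambda(lam, df1, df2, alpha=0.05):
    """Power = P(F' > F_crit), where F' is noncentral F with noncentrality lam."""
    f_crit = f.ppf(1.0 - alpha, df1, df2)        # central F critical value
    return 1.0 - ncf.cdf(f_crit, df1, df2, lam)  # area of the noncentral F beyond it

# Hypothetical example: a test with 2 and 58 degrees of freedom and lambda = 12
print(round(power_from_lambda(lam=12.0, df1=2, df2=58), 3))
```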
Means, standard deviations and complete correlation matrices of conditions involved are given in appendices 4.1-4.3 and 6.1-6.2, respectively. The description of power results for the rnixed model that follows may be facilitated by referring to these tables and appropriate figures when indicated. Potvin '96 84 Results Table 4.7a Power of the Groups Main Effect For a 2(Groups) x 3(Trials) ANOVA With Repeated Measures On One Factor. Test: Randomized Group Main Effect Alpha = .01 ES: Small (.20) Medium (50) Larg e(.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C c C C C n 5 01 01 04 03 10 07 10 02 02 10 07 30 19 15 03 02 17 11 52 34 20 04 03 25 16 71 50 25 04 03 34 22 84 63 30 05 04 42 27 92 75 Alpha = .05 ES: Small (JO) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C C n 5 06 06 14 11 29 22 10 08 07 26 20 58 43 15 10 09 39 28 79 62 20 12 10 50 36 91 76 25 14 11 60 45 96 86 30 16 12 69 52 99 92 Alpha = .10 ES: Small (JO) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C G C C C C n 5 12 12 24 20 43 34 10 15 13 39 30 73 58 15 18 15 53 41 89 75 20 20 17 64 50 96 87 25 23 19 74 59 99 93 30 25 21 81 66 100 97 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent. Potvin '96 85 Results Table 4.7b Power of the Trials Main Effect For a 2(Groups) x 3(TriaIs) ANOVA With Repeated Measures On One Factor. Test: Trials (Repeated Measures) Main Effect Alpha = .01 ES: Small (20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C c C C C n 5 02 03 06 24 20 71 10 03 07 18 67 57 99 15 04 12 32 91 84 100 20 05 18 46 98 95 100 25 06 25 60 100 99 100 30 08 32 72 100 100 100 Alpha = .05 ES: Small (20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C c n 5 07 12 19 51 44 92 10 10 20 39 88 81 100 15 13 30 57 98 96 100 20 15 39 72 100 99 100 25 18 48 83 100 100 100 30 21 56 90 100 100 100 Alpha = .10 ES: Small (20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C c n 5 13 20 30 66 59 97 10 17 31 53 94 90 100 15 21 42 70 99 98 100 20 25 52 83 100 100 100 25 28 61 91 100 100 100 30 32 69 95 100 100 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent Potvin '96 86 Results Table 4.7c Power of the Interaction Test For a 2(Groups) x 3(Trials) ANOVA With Repeated Measures On One Factor. Test: Group by Trials Interaction Alpha = .01 ES: Small (.20) Medium (SO) Large (SO) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C C n 5 01 01 02 05 04 14 10 01 02 04 12 10 41 15 02 03 06 22 18 67 20 02 04 08 32 26 85 25 02 05 11 44 36 94 30 02 06 13 54 45 98 Alpha = .05 ES: Small (20) Medium (50) Larg e(S0) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C C n 5 06 07 08 16 14 34 10 06 09 13 30 26 68 15 07 11 17 44 38 88 20 07 13 22 58 50 96 25 08 15 27 69 61 99 30 09 17 31 78 71 100 Alpha = .10 ES: Small (20) Medium (50) Large (.80) r. 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C C n 5 11 12 15 25 23 48 10 12 15 21 43 38 80 15 13 18 27 58 52 94 20 14 21 33 71 64 99 25 15 24 38 81 74 100 30 16 27 44 88 82 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent. 
Potvin '96 87 Results Table 4.8a Power of the Groups Main Effect For a 2(Groups) x 6(Trials) ANOVA With Repeated Measures On One Factor. Test: Randomized Group Main Effect Alpha = .01 ES: Small (.20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C c C C C n 5 02 01 05 03 12 07 10 02 02 12 07 38 20 15 03 02 21 12 63 36 20 04 03 31 17 81 52 25 05 03 42 23 91 66 30 06 04 52 29 96 77 Alpha = .05 ES: Small (20) Medium (50) Large (.80) r. 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C C n 5 07 06 16 12 34 22 10 09 07 31 20 67 45 15 11 09 45 29 86 64 20 13 10 58 38 95 78 25 16 11 69 46 99 88 30 18 13 78 54 100 93 Alpha = .10 ES: Small (20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: e C C C C C n 5 13 12 26 20 49 35 10 16 14 44 31 80 59 15 19 15 60 42 94 77 20 22 17 72 52 98 88 25 25 19 81 60 100 94 30 28 21 88 68 100 97 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent Potvin '96 88 Results Table 4.8b Power of the Trials Main Effect For a 2(Groups) x 6(Trials) ANOVA With Repeated Measures On One Factor. Test: Trials (Repeated Measures) Main Effect Alpha = .01 ES: Small (JO) Medium (50) Large (.80) r. 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C C n 5 02 03 06 28 22 82 10 02 07 18 74 63 100 15 03 12 34 95 89 100 20 04 18 50 99 98 100 25 06 25 65 100 100 100 30 07 33 77 100 100 100 Alpha = .05 ES: Small (JO) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C c n 5 07 11 19 54 46 95 10 09 20 39 91 84 100 15 12 29 58 99 97 100 20 14 39 74 100 100 100 25 17 48 85 100 100 100 30 20 57 92 100 100 100 Alpha = .10 ES: Small (JO) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C c n 5 13 19 30 67 60 98 10 16 30 52 95 92 100 15 20 41 71 100 99 100 20 23 52 84 100 100 100 25 27 61 92 100 100 100 30 31 70 96 100 100 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent. Potvin '96 89 Results Table 4.8c Power of the Interaction Test For a 2(Groups) x 6(Trials) ANOVA With Repeated Measures On One Factor. Test: Group by Trials Interaction Alpha = .01 ES: Small (.20) Medium (SO) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C c C C C C n 5 01 01 02 05 04 15 10 01 02 03 12 10 45 15 01 03 05 22 18 73 20 02 03 07 34 27 90 25 02 04 10 47 38 97 30 02 05 13 59 48 99 Alpha = .05 ES: Small (20) Medium (SO) Large (.80) n 0A0 0.80 0.40 0.80 0.40 0.80 r pattern: C G C C C C n 5 05 06 08 15 13 35 10 06 08 12 29 25 70 15 07 10 16 45 38 90 20 07 12 21 59 51 97 25 08 14 26 71 63 99 30 08 16 31 81 72 100 Alpha = .10 ES: Small (20) Medium (SO) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: G C C C C C n 5 11 12 15 25 22 49 10 12 15 20 42 37 81 15 12 17 26 58 51 95 20 13 20 32 71 64 99 25 14 23 37 82 75 100 30 15 25 43 89 83 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent Potvin '96 90 Results Table 4.9a Power of the Groups Main Effect For a 2(Groups) x 9(Trials) ANOVA With Repeated Measures On One Factor. Test: Randomized Group Main Effect Alpha = .01 ES: Small (20) Medium (50) Large (.80) r. 
0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C c C G C n 5 02 01 05 03 13 07 10 02 02 13 07 41 21 15 03 02 23 12 67 37 20 04 03 34 17 84 53 25 06 03 45 23 93 66 30 07 04 56 29 98 78 Alpha = .05 ES: Small (20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C C n 5 07 06 17 12 36 23 10 09 07 33 20 70 45 15 12 09 48 29 89 64 20 14 10 61 38 97 79 25 16 11 72 47 99 88 30 19 13 81 55 100 94 Alpha = .10 ES: Small (20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r partem: C C G C C C n 5 13 12 28 20 52 35 10 16 14 46 32 83 60 15 20 16 62 42 95 78 20 23 17 75 52 99 89 25 26 19 84 61 100 95 30 29 21 90 69 100 98 C = Constant Correlation Matrix Pattern (e =1.0) ~ ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent. Potvin '96 91 Results Table 4.9b Power of the Trials Main Effect For a 2(Groups) x 9(Trials) ANOVA With Repeated Measures On One Factor. Test: Trials (Repeated Measures) Main Effect Alpha = .01 ES: Small (.20) Medium (SO) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C c C C C n 5 02 03 07 35 27 91 10 02 07 21 84 74 100 15 03 14 40 98 95 100 20 05 21 59 100 99 100 25 06 30 75 100 100 100 30 08 39 86 100 100 100 Alpha = .05 ES: Small (20) Medium (SO) Larg e(.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C C C C c n 5 07 12 21 61 52 98 10 09 21 44 95 90 100 15 12 32 65 100 99 100 20 15 43 81 100 100 100 25 18 54 91 100 100 100 30 22 64 96 100 100 100 Alpha = .10 ES: Small (20) Medium (SO) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C e C c C c n 5 13 20 32 73 66 99 10 : 17 32 57 98 95 100 15 21 45 77 100 100 100 20 25 56 89 100 100 100 25 29 67 95 100 100 100 30 33 75 98 100 100 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent. Potvin'96 92 Results Table 4.9c Power of the Interaction Test For a 2(Groups) x 9(Trials) ANOVA With Repeated Measures On One Factor. Test: Group by Trials Interaction Alpha = .01 ES: Small (20) Medium (SO) Larg iW) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: C C c C C C n 5 01 01 02 05 04 18 10 01 02 04 14 11 54 15 01 03 06 26 21 83 20 02 04 08 41 33 95 25 02 04 11 55 45 99 30 02 06 15 68 57 100 Alpha = .05 ES: Small (20) Medium (SO) Larg e(*0) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: c C C C C C n 5 05 06 08 16 14 39 10 06 08 12 33 28 77 15 07 10 17 50 43 95 20 07 12 22 66 57 99 25 08 14 28 78 69 100 30 08 17 34 87 79 100 Alpha = .10 ES: Small (20) Medium (50) Large (.80) n 0.40 0.80 0.40 0.80 0.40 0.80 r pattern: e e C C C e n 5 n 12 15 26 23 53 10 12 15 21 45 40 86 15 12 18 27 63 56 97 20 13 21 34 77 70 100 25 14 24 40 87 80 100 30 15 27 47 93 88 100 C = Constant Correlation Matrix Pattern (e = 1.0) ES = Effect Size (d) n = Sample Size per group r = Average of Correlation Coefficients in a given Matrix All power values are in percent Potvin '96 93 Results B. Power Trends Alpha. Sample Size and Effect Size Similar to the one-way and two-way RM ANOVA and as predicted, power was found to increase as ct, n or ES increased for any given experimental condition and test of the two-way mixed design. Exceptions were for those conditions in which power either approached level of significance or 1.00, in which case, values were approximately equal. 
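The opposite ways in which the average correlation acts on the between-subjects and within-subjects tests of this design (described under Average Correlation below) can be anticipated from standard compound-symmetry approximations to the noncentrality parameters. The expressions that follow are offered only as a hedged, textbook-style sketch assuming a common variance \(\sigma^2\) and a common correlation \(\rho\) among the K trials, with G groups of n subjects; they are not the formulas used to generate the tables above.

\[
\lambda_{\text{Groups}} \approx \frac{nK\sum_{g}(\mu_{g\cdot}-\mu)^{2}}{\sigma^{2}\,[\,1+(K-1)\rho\,]}, \qquad
\lambda_{\text{Trials}} \approx \frac{nG\sum_{k}(\mu_{\cdot k}-\mu)^{2}}{\sigma^{2}\,(1-\rho)}, \qquad
\lambda_{\text{G}\times\text{T}} \approx \frac{n\sum_{g,k}(\mu_{gk}-\mu_{g\cdot}-\mu_{\cdot k}+\mu)^{2}}{\sigma^{2}\,(1-\rho)}.
\]

Because \(\rho\) enters the Groups test through \(1+(K-1)\rho\) but the Trials and interaction tests through \(1-\rho\), a higher average correlation should reduce power for the between-subjects test and raise it for the within-subjects tests, which is the pattern reported next.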
Average Correlation

The effects of increasing the Ave r among trials of the repeated measures factor in the mixed ANOVA model differ depending on the test involved. For a main effect test on the grouping factor, an increase in Ave r from .4 to .8, as expected, causes a decrease in power for almost any given α, n, ES and K, as shown in tables 4.7a, 4.8a and 4.9a. Figure 4.10 (top graph) also depicts this common trend for the groups test of all three designs under conditions involving medium ES, a level of significance of .05 and n = 15. For the trials main effect and interaction tests, the reverse is true, with both displaying an increase in power under most conditions as Ave r is increased (center and bottom graphs, respectively, and tables 4.7b-c, 4.8b-c and 4.9b-c).

Figure 4.10. Comparisons of power between two-way mixed ANOVA designs for the Groups, Trials and Group by Trials tests as the average correlation among repeated trials is increased. Based on medium ES, α = .05 and n = 15 per group.

Number of Repeated Measures - Differences Between Designs

Figure 4.11 illustrates the common power trends across designs under different effect sizes for each of the three tests of the mixed model. In most cases, the changes in power across K were similar to those seen in the two-way RM model. For the group test, like the A test of the RM model, there was a tendency for power to increase as K increased. This increase was greatest at larger effect sizes and when the Ave r among trials was low (.4), as demonstrated in the top graph of Figure 4.11. Although not shown, when the Ave r was moderately high (.8), differences across designs were slight (.03 at most). For the Trials test, a similar trend was seen as for the group test (center graph), with differences across K being greatest at medium and large effect sizes. However, unlike the group test, differences in power were found under both Ave r. The interaction test showed similar results to the trials test, but only at larger effect sizes and when K = 9 (bottom graph). The group by trials results were also similar to those of the AB test in the two-way RM model. Of the values reported in tables 4.7a-4.9c, the largest differences in power across designs for the group, trials and interaction tests were .15, .20 and .16, respectively. Minimum differences were found when power among designs was generally low (small effect sizes) or approached 1.00.

Figure 4.11. Comparisons of power between two-way mixed ANOVA designs with different levels of RM (3, 6 and 9) under varying effect sizes for the Group, Trials and Group by Trials tests (panels: Groups Main Effect, r = .4; Trials Main Effect, r = .4; Groups by Trials Interaction, r = .8). Based on α = .05, n = 15 per group.

Between Test Comparisons: Main Effects and Interaction

Figure 4.12 portrays the general power order between tests under different Ave r for most experimental conditions of the two-way mixed ANOVA. As can be seen, at a moderately low Ave r (.4), the trials test displays the most power, followed by the group test, while the interaction test exhibits the least.
At a moderately high Ave r (.8), and as expected, we see that power for the group test is reduced, dropping below that of the interaction test, while power for the other tests increases from its values under Ave r = .4. This general power order observed across Ave r's for tests of the two-way model holds under most experimental conditions and designs. Exceptions exist for the 2 x 6 design under small ES and moderately low Ave r (.4), where power values remain about the same between the main effect tests, as well as at the extremes of the power curve, where differences are minimal.

Figure 4.12. A comparison of power between tests of a 2 x 6 mixed ANOVA design (Groups Main Effect, Trials Main Effect, Groups x Trials Interaction) as the average correlation among repeated trials is increased. Based on medium ES, α = .05 and n = 15 per group.

Chapter Five

Discussion

The relationships between power and level of significance, sample size and effect size have been discussed previously in chapter 2 and therefore require no further elaboration. In addition, the effects of varying the average correlation coefficient on the power of a one-way RM ANOVA and on the tests of a two-way design with a single repeated measures factor have been explained in detail and need not be repeated here. The focus of this section, therefore, will be to discuss those relationships less well understood: in particular, the influence that other important statistical parameters, such as nonspherical correlation matrices, different levels of RM and the existence of multiple RM factors, have on power.

I. One-Way Repeated Measures ANOVA

Power Comparisons Across Constant (Spherical) and Trend (Nonspherical) Correlation Matrices

The observed differences in power between one-way ANOVA designs with constant and trend r matrices were in agreement with the hypothesis that power would be altered under heterogeneous r matrices. Our findings also agree with results from previous studies that have examined power under conditions of low epsilon. Marcucci (1986) and Muller and Barton (1986) showed, under a false null hypothesis (effect size greater than 0), that as epsilon was lowered, power for a RM ANOVA design was overestimated, while Mendoza et al. (1974) found it to be underestimated. One of the main reasons for this difference in power between designs with high and low epsilon, as seen in this and other studies, is an increase in the variability of the F ratio which occurs as epsilon is decreased. Eom (1993) indicated that the increase in the variance of the F ratio is a direct consequence of an increase in the variability of both its numerator (mean sum of squares for trials, or MSK) and denominator (mean square error, or MSERR) terms. In his study he demonstrated, using Monte Carlo simulation, that under conditions of nonsphericity the null F ratio of a RM ANOVA test is more variable, resulting in a greater occurrence of outliers. Although his study involved examination of type I error rates rather than power, it is believed that an increase in the variability of the F ratio is also responsible for the power trends observed in the present study under T.
In order to verify this was the case in this study, power under C for various conditions of the one-way design was also derived using Monte Carlo procedures, and several statistics, including the F ratio, were compared with those from identical conditions under T. For all conditions examined, values averaged over three thousand ANOVA test replications revealed that the mean F, MSK and MSERR values between conditions with C and T remained relatively equal, while the respective standard deviations (SD) of these statistics were all greater under T. Figures 5.01 and 5.02 include plots of the distributions of the F values under small and medium ES, respectively, and varying K. For comparison, the respective F distributions under the null hypothesis (Ho C) are also plotted. Among the distributions under the alternate hypothesis (Ha) of any one design (one graph), we see that the mean F ratios of tests under C and T are more or less similar but their SD are quite different, with those under T all being higher. Moreover, we see that all distributions under T tend to be more concentrated in the tail regions. Under medium ES, these distributions appear to be more platykurtic (flatter), while under small ES they are more positively skewed than those of C. This greater concentration in the tails and altered shape of the distributions under T is due to the larger number of outliers that result when epsilon is low, as explained earlier. Such an effect tends to spread the distribution of F values over a greater range, thereby altering the power of a design when the alternative hypothesis is true.

Figure 5.01. F distributions for one-way repeated measures ANOVA designs when effect size is small (.2) and the pattern of the correlation matrix is altered. Ha C = alternate F distribution under a constant correlation matrix; Ho C = null F distribution under a constant r matrix; Ha T = alternate F distribution under a trend correlation matrix; X̄ = mean F. Note: All designs based on ave r = .8, n = 30. Panel statistics: K = 3 - Ho C mean F = 1.02 (SD 1.09), Ha C 2.64 (2.22), Ha T 2.60 (2.88), power Ha C = .12, Ha T = .16 (T > C); K = 6 - Ho C 1.01 (0.67), Ha C 1.85 (1.07), Ha T 1.86 (1.72), power .12 vs .17 (T > C); K = 9 - Ho C 1.02 (0.52), Ha C 1.72 (0.81), Ha T 1.73 (1.30), power .14 vs .20 (T > C).

Figure 5.02. F distributions for one-way repeated measures ANOVA designs when effect size is medium (.5) and the pattern of the correlation matrix is altered. Labels as in Figure 5.01. Note: All designs based on ave r = .8, n = 30. Panel statistics: K = 3 - Ho C mean F = 1.02 (SD 1.09), Ha C 10.89 (5.19), Ha T 11.12 (7.16), power Ha C = .91, Ha T = .82 (C > T); K = 6 - Ho C 1.01 (0.67), Ha C 6.40 (2.31), Ha T 6.57 (3.77), power .95 vs .82 (C > T); K = 9 - Ho C 1.02 (0.52), Ha C 5.44 (1.64), Ha T 5.48 (2.84), power .98 vs .87 (C > T).
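The Monte Carlo procedure just described can be sketched in a few lines. The snippet below is a minimal illustration of the general approach rather than the code used in this thesis: correlated trial scores are generated under a chosen correlation matrix, the one-way RM ANOVA F is computed for each replication, and power is estimated as the proportion of replications exceeding the critical F. The means, standard deviation and matrix supplied in the example are placeholders.

```python
# Hedged sketch of the Monte Carlo approach described above (not the thesis code):
# simulate correlated repeated measures, compute the one-way RM ANOVA F for each
# replication, and estimate power as the proportion of significant tests.
import numpy as np
from scipy.stats import f

def rm_anova_power(means, sd, corr, n, alpha=0.05, reps=3000, seed=1):
    rng = np.random.default_rng(seed)
    k = len(means)
    cov = np.outer([sd] * k, [sd] * k) * corr            # covariance matrix
    df1, df2 = k - 1, (k - 1) * (n - 1)
    f_crit = f.ppf(1 - alpha, df1, df2)
    hits = 0
    for _ in range(reps):
        y = rng.multivariate_normal(means, cov, size=n)  # n subjects x k trials
        grand = y.mean()
        ss_trials = n * ((y.mean(axis=0) - grand) ** 2).sum()
        ss_subjects = k * ((y.mean(axis=1) - grand) ** 2).sum()
        ss_error = ((y - grand) ** 2).sum() - ss_trials - ss_subjects
        F = (ss_trials / df1) / (ss_error / df2)
        hits += F > f_crit
    return hits / reps

# Placeholder example: K = 3 trials, constant correlation of .8, n = 30 subjects
r = np.full((3, 3), 0.8) + 0.2 * np.eye(3)
print(rm_anova_power(means=[0.0, 0.1, 0.2], sd=1.0, corr=r, n=30))
```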
Although power is known to be altered under conditions of nonsphericity, as shown in other studies (Marcucci, 1986; Muller & Barton, 1986; Grima, 1987; Mendoza et al., 1974), the reason(s) why power can sometimes result in values above or below those under C has not been well documented. One possible explanation may be exemplified when identical designs with different effect sizes are compared across Figures 5.01 and 5.02. Referring to any one graph in Figure 5.01, we see that when ES is small, all the mean F ratios of Ha fall below the F critical values of their respective null distributions (e.g. F2,58; .01 = 5.00 for a design with K = 3 levels). More importantly, the area of the F distribution past this critical point (to the right) is slightly greater under T than it is under C. With more variability in the F ratio under T, the greater number of outliers occurring above this point compared to C results in more tests achieving significance, therefore producing greater power for the design with a trend r matrix pattern. In contrast, when examining any one graph in Figure 5.02 under a medium effect size, all the mean F ratios under Ha are above the null F critical values. In addition, the area of the F distribution above the critical point is less under T than it is under C. Thus, with more variability in the F ratio under T, the greater number of outliers falling below (to the left of) this point compared to C reduces the power of designs involving a trend matrix. What seems especially important here in deciphering whether a trend design will show greater or less power is not so much the magnitude of the effect size involved but rather all the statistical factors that contribute to a design having a mean F ratio which falls above or below its F critical value. When the mean F ratio falls above its critical value, the greater variability in F that results for tests under nonsphericity will cause a decrease in power relative to identical designs under sphericity. Likewise, when the mean F ratio is below its critical value, the greater variance in F that results produces a design with more power under T. This helps explain why, in this study, a design exhibiting low power under C showed slightly greater power when a nonspherical r matrix was involved while, at the same time, another design with high power under C showed less power under T. This may also clarify findings from the Mendoza et al. (1974) study, in which a test's power was shown to be less under T when power under C was moderate (.5) to high (.89). However, for reasons unclear, it does not account for the results of Marcucci (1986) and Muller and Barton (1986), who found power to be greater under T regardless of a test's power under spherical structures. The discrepancy in results may be due to the fact that the latter studies involved analytical approximations of power, whereas the Mendoza et al. investigation, like this one, involved Monte Carlo simulation.

Power Comparisons Across Designs With Different Levels of RM (K)

Constant r Matrix

Under a constant r matrix pattern, the general observation that power increased slightly as the number of repeated measures within a one-way ANOVA design increased can, in most instances, be attributed directly to an increase in a design's noncentrality parameter (λ), as given in equation 2.01.
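Equation 2.01 is not reproduced in this chapter; as orientation only, a familiar compound-symmetry approximation of λ for the one-way RM design (assuming a common variance \(\sigma^2\) and an average correlation \(\bar{\rho}\) among the K trials, and not necessarily identical in form to equation 2.01) is

\[
\lambda \approx \frac{n\sum_{j=1}^{K}(\mu_{j}-\mu)^{2}}{\sigma^{2}\,(1-\bar{\rho})},
\]

so that, with the spacing of the trial means fixed by d, each additional level contributes a further squared deviation from the grand mean and λ rises with K.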
The reader may recall that λ represents the factor by which the F ratio departs from the central F distribution when a true difference between means exists, and that it has a curvilinear relationship with power. The increase in λ when K increases and all other parameters are held constant is the result of a greater numerator sum of squares for the trials effect, which occurs because of the greater number of means that exist in a design with larger K (i.e. a greater number of deviations from the grand mean, Σ(μj − μ)²). Comparing λ across designs with different K for those conditions in which power was determined using DATASIM (λ values are given in appendices 7.1-7.4) shows that, under most conditions, an increase in K results in an increase in λ and thus power. However, when effect size is small, resulting in a design with low power (<.20), an increase in K seems to have little effect on improving power, despite an increase in λ (see top graph of figure 4.04 again). This finding seems contrary to the expected relationship between λ and power. In order to help explain this occurrence, the F distributions from Figures 5.01 and 5.02 for the different designs under C were plotted against each other, as shown in Figure 5.03 for each effect size. For comparison, the F distributions of designs under the null hypothesis (Ho C) are also shown (top graph). Examining the different distributions under a small effect size (center graph), we see that as the number of RM increases from 3 to 9, the F distributions change in shape, going from a relatively flat and positively skewed distribution (K = 3) to one that is more leptokurtic and bell-shaped (K = 9). Also important to note is how the means and standard deviations of the alternate F's and their respective critical F values decrease as K increases. This decrease in mean F values as K increases is the result of a decrease in the numerator of the F ratio alone, since mean MSK was found to decrease with higher levels of RM while MSERR (the denominator of F) remained constant across designs (these latter statistics are not shown). From this information, therefore, it appears that for a design with K = 3 levels, despite having a larger F critical value (F3 = 5.00), the greater variability (more outliers) and flatter distribution of its F results in an almost equal area (power) occurring to the right of the critical point as that observed in designs with 6 or 9 RM levels. Thus, it seems the differences in F distributions observed between designs with different K under C and a small effect size (low power) tend to dissipate the power advantages gained from having a greater number of RM levels. This helps explain why an increase in λ across designs with greater K did not always produce higher power under such conditions. Although the decrease in mean F values across designs with greater K may at first seem surprising given that λ increases (with λ and F directly related, one might expect an increase in one to mirror an increase in the other), it should be remembered that the numerator of λ (i.e. nΣ(μj − μ)²) is not divided by the numerator degrees of freedom as F is (SSK/(K − 1)), and it is therefore not reduced when K is increased. In contrast, when examining the distributions under a medium effect size (bottom graph of Figure 5.03), we see a different effect emerge.
Although the change in the shape of the distributions when K is increased is similar to that observed earlier under a small effect size (with the exception that all distributions are less positively skewed), the area to the right of each design's respective F critical value differs across K. Here a design with 9 RM levels has most of the area of its distribution to the right of its F critical point (F9, .01 = 2.59), followed by a design with K = 6 levels, which has slightly less, while a design with K = 3 displays the least. Again the means and standard deviations of F are found to decrease as K increases. In this case, the flatter distribution and greater variability of F for those designs with fewer repeated measures serve as a detriment, leading to less power, while the more leptokurtic distribution and smaller variability of F among tests with more RM provide a power advantage (e.g. K3 = .91, K6 = .95 & K9 = .98). Thus, under conditions of medium effect size (moderately high power) and a constant r matrix pattern, the increase in λ that accompanies designs with more RM levels is not overshadowed by a less variable F but, in fact, enhanced, leading to greater power for such designs. This explains why, when assumptions of sphericity are met, we see a tendency for power to increase as K increases under larger effect sizes.

Figure 5.03. F distributions for one-way ANOVA designs with 3, 6 and 9 repeated measures under a constant correlation matrix and varying effect size. Note: All designs based on ave r = .8, n = 30; F critical values given within graphs. Panel statistics: Null F (ES = 0) - K = 3 mean F = 1.02 (SD 1.09), K = 6: 1.01 (0.67), K = 9: 1.02 (0.51), power = .01 for all three (F3, .01 = 5.00); Alternate F (ES = .2) - K = 3: 2.64 (2.22), K = 6: 1.85 (1.07), K = 9: 1.72 (0.81), power = .12, .12, .14 (K9 > K6 = K3); Alternate F (ES = .5) - K = 3: 10.89 (5.19), K = 6: 6.40 (2.31), K = 9: 5.44 (1.64), power = .91, .95, .98 (K9 > K6 > K3).

Trend r Matrix

The changes in power observed between designs with different RM when the assumptions of sphericity are not met (low epsilon) may be explained by also comparing the pattern of F distributions across designs. Such a comparison is given in Figure 5.04 for conditions involving both small and medium effect sizes. Again, each design's respective null F distribution (under C) is shown (top graph). Referring to the center graph, we see that when effect size is small, all three distributions are fairly skewed, with the highest frequency of F values occurring at or below 1.0. The distributions, like those observed under C, differ from one another in how values are concentrated in the tail regions, with K = 3 showing the most concentration and K = 9 the least. This difference in concentration is also reflected by the decrease in the SD and mean of F that occurs when the number of RM levels is increased, as shown to the right of the graph. Although the distribution for the design with 3 levels has greater variability than those with 6 or 9, the area to the right of its F critical point is smallest, as indicated by its power (.16). This is different to
that seen earlier under C, where the area was equal among the three designs. The reason for this discrepancy under T is due to differences in the number of outliers that reach significance among designs with varying K. When epsilon is low, the number of outliers that surpass the critical F for a design with 3 RM is less than that for a design with 6, and lesser still than that for a design with 9 (even though the magnitude of the outlier values is greatest for K = 3). This is because the F distribution of K = 3 is less centrally concentrated around the critical F, providing fewer opportunities to obtain outliers past this point (compare the slopes of each design's distribution at the F critical points). Referring now to the bottom graph of Figure 5.04, in which power for the different designs is relatively high due to a larger effect size, we see an almost opposite effect occurring. Here, the mean F values for the three distributions are all above their respective F critical values, with K = 3 again demonstrating the highest mean value, followed by K = 6 and then K = 9. The shapes of the distributions are all flatter and less positively skewed than those observed under a small ES, with the K = 3 distribution showing the highest SD and the K = 9 distribution the least. In this situation, we see how a larger portion of the distribution for K = 9 falls above its F critical point compared to that of K = 6 and K = 3, resulting in greater power for this design. What is more important to note from this graph is the larger concentration of F values which border the critical points of designs with higher K. Under T, such a concentration provides a greater opportunity for outliers to occur below the critical F (i.e. reduce power) among those designs with more RM. Although the example given in the bottom graph of Figure 5.04 does not adequately represent this effect (power is greatest for K = 9), other comparisons between these designs under medium and large ES do indicate this trend very well (see α = .05 and .10 in the power tables as well as the center and bottom graphs of Figure 4.04).

Figure 5.04. F distributions for one-way ANOVA designs with 3, 6 and 9 repeated measures under a trend correlation matrix and varying effect size. Note: All designs based on ave r = .8, n = 30; F critical values given within graphs. Panel statistics: Null F under C (ES = 0) - K = 3 mean F = 1.02 (SD 1.09), K = 6: 1.01 (0.67), K = 9: 1.02 (0.51), power = .01 for all three; Alternate F under T (ES = .2) - K = 3: 2.60 (2.88), K = 6: 1.86 (1.72), K = 9: 1.73 (1.30), power = .16, .17, .20 (K9 > K6 > K3), F9, .01 = 2.59; Alternate F under T (ES = .5) - K = 3: 11.12 (7.16), K = 6: 6.57 (3.77), K = 9: 5.48 (2.84), power = .82, .82, .87 (K9 > K6 = K3).

Therefore, it appears that under conditions of nonsphericity, the degree to which power is affected by a difference in the number of RM depends on the magnitude of power a design acquires from other factors and on the F distribution involved. When a design with a small effect size (low power) has a low number of RM (e.g. K = 3), there seems to be a slightly lower probability of obtaining an F value above the level of significance
because of the lower concentration of F values occurring around its critical point compared to that observed for designs having larger numbers of RM (e.g. K = 9). In such a case, power generally tends to be lower. In contrast, when the same design involves a medium effect size (high power), the probability of obtaining an F value below the level of significance can be less than that of designs with more RM, for the same reasons mentioned above. In such a situation, power for a design with fewer RM may even be greater than for a design with larger K, as was evidenced for some conditions of this study. An important point worth mentioning when examining power across designs with different K, under both sphericity and nonsphericity, is that the results obtained are to a great extent dependent on the definition of ES (d) used in this study. Since d was fixed across designs, it should be expected that the tendency for power to increase as K increased would be less than what would have been observed had ES been defined by Cohen's f. This is because the latter statistic accounts for the reduction in numerator variance that occurs among designs with greater RM by enlarging the difference between means (i.e. making d bigger) for designs with more K. Therefore, under a fixed f, power would be expected to increase more as K increased. However, unlike the reasons given in this study, the increase in power across designs would be more attributable to larger differences between means than to an increase in K.

II. Two-Way RM ANOVA

Power Comparisons Across Different Correlation Matrices

Main Effect Tests

The power trends observed across different r matrices of the two-way RM ANOVA are in partial agreement with the hypotheses of this study. It was predicted, based on earlier pilot work, that power for a main effects test (e.g. the B test) would increase as the Ave r among pooled trials (factor A) decreased. As shown by the simulation results of the two-way RM ANOVA, this was true only when the Ave r of the averaged-over RM factor was greater than that of the main effects factor (an 8-4 matrix in this example). If it was equal to or less than that of the main effects factor, power was no longer affected. The discrepancy between these results and the findings from earlier pilot work can be traced to errors that were made in the pilot study when generating covariance matrices. Misleading information which resulted from these errors (r coefficients for AB pairs were entered incorrectly) was used in the formation of some of the hypotheses of this study. The reason for seeing a change in the power of main effects tests with different r matrices was also somewhat different from what was anticipated. Since alterations in the r matrix are known to affect the error variance of a RM test (Winer et al., 1991), decreases in power for tests with different matrices are expected to be caused by an increase in the error term (denominator) of their F ratios. Although this was found to occur for all main effects tests (and interaction tests as well), what was surprising to observe, and contrary to results from pilot work, was a concomitant increase in the numerator of the F ratio for those tests with matrices resulting in lower power. This is demonstrated in Table 5.1, in which the mean and standard deviation of several statistics generated from Monte Carlo simulation are given for two-way RM designs with medium ES, n = 10 and α = .05. Under any one particular design (i.e.
3 x 3, 3 x 6 or 3 x 9), we see that the mean sum of squares among pooled trials of factor B and the mean square error for the B test (MSB and MSERR-B, respectively) are lowest for those matrices showing the highest power and greatest for those demonstrating the least. The same holds true for the A main effect and AB interaction tests as well.

Table 5.1. Mean and standard deviation of selected statistics from the Monte Carlo replications (MS values, error terms and F ratios of the A, B and AB tests) for 3 x 3, 3 x 6 and 3 x 9 RM designs under the 4-4, 4-8, 8-4 and 8-8 correlation matrices. Note: Based on medium ES, n = 10 and α = .05.

A possible reason for this occurrence can be explained by examining the expected mean squares model for two-way RM designs, as given by Howell (1992). In this model, the expected F ratios of the main effects and interaction tests are as follows:

\[
E(F_A) = \frac{E(MS_A)}{E(MS_{ERR\text{-}A})} = \frac{\sigma^{2}_{e} + b\sigma^{2}_{\alpha\pi} + nb\sigma^{2}_{\alpha}}{\sigma^{2}_{e} + b\sigma^{2}_{\alpha\pi}} \tag{5.01}
\]

\[
E(F_B) = \frac{E(MS_B)}{E(MS_{ERR\text{-}B})} = \frac{\sigma^{2}_{e} + a\sigma^{2}_{\beta\pi} + na\sigma^{2}_{\beta}}{\sigma^{2}_{e} + a\sigma^{2}_{\beta\pi}} \tag{5.02}
\]

\[
E(F_{AB}) = \frac{E(MS_{AB})}{E(MS_{ERR\text{-}AB})} = \frac{\sigma^{2}_{e} + \sigma^{2}_{\alpha\beta\pi} + n\sigma^{2}_{\alpha\beta}}{\sigma^{2}_{e} + \sigma^{2}_{\alpha\beta\pi}} \tag{5.03}
\]

where a and b are the numbers of levels of factors A and B, n is the number of subjects, and π denotes the subject effect. In all three equations, we see that the error term (e.g. σ²e + aσ²βπ for the B test) is included along with the treatment effect (e.g. naσ²β) in forming the numerator of the F ratio. Therefore, according to this model, changes occurring in the denominator term also bring about similar changes in the numerator over repeated replications of an ANOVA test. This explains why the results of this study showed an increase in both the numerator and denominator across different r matrices. Further evidence of an error variance contribution to the numerator term can be confirmed by subtracting the MSERR of any given test condition in Table 5.1 from its corresponding mean square for trials. Such a subtraction results in approximately equal treatment effect totals (naσ²β in the case of the B test) across all matrices of a test. Interestingly, results from our initial pilot project did not show an increase in the numerator of F, instead producing constant numerator MS values across different r matrices. The reason for this is uncertain, but it may have been due to the fact that only a single replication (one ANOVA test) was performed instead of many in the pilot study.
With one replication, the contribution of error variance to the numerator of F may not be observable, since error due to random sampling is absent. With the changes in numerator variance accounted for, it therefore seems evident that differences in the power of tests with different r matrices are the result of alterations in MSERR alone. Referring back to Table 5.1, we see that for the main effect tests, those matrices associated with lower power, regardless of design, produce higher MSERR values. For the A test, conditions involving an 8-4 or 8-8 matrix produced an equally low MSERR, while those with a 4-4 matrix resulted in a greater MSERR, and tests having a 4-8 matrix produced the largest MSERR. The B test showed similar results, with the exception that the 8-4 matrix and 4-8 matrix reversed their rank order. These findings seem to suggest that MSERR is dependent on the magnitude of the Ave r of both the A and B pooled trials, but not necessarily in every case. When the Ave r among trials of the main effects factor is equal to or greater than that of the averaged-over factor, MSERR seems to be affected only by the magnitude of the Ave r of the main effects factor and unaltered by the magnitude of the Ave r of the pooled-over factor. In contrast, when the Ave r among trials of the main effects factor is less than that of the other factor, the magnitude of the Ave r of the pooled factor also plays an important role in influencing MSERR. In this case, the influence on MSERR is somewhat similar to that observed when the Ave r of the RM factor is increased in a groups effect test of a two-way mixed ANOVA (see chapter 4).

Interaction

The results observed across the four matrices for the interaction test agreed with the researcher's initial hypothesis that power would increase as the Ave r among trials of both factors increased (i.e. from the 4-4 to the 8-8 matrix). A surprise is the finding that an increase in only one, and not both, factors' Ave r was necessary to achieve an increase in power equal in magnitude to that obtained for the 8-8 matrix. Table 5.1 shows that the greater power and larger F ratios of AB tests with 4-8, 8-4 and 8-8 matrices over a test with a 4-4 matrix are due entirely to a reduction in MSERR-AB (MSAB is lower also, but for the same reasons outlined earlier). In addition, the MSERR-AB values for those matrices resulting in equal power are all similar. This seems to suggest that the error variance of the AB test is affected solely by the magnitude of the Ave r of the pooled matrix (A or B) having the highest average correlation among its trials. Thus, when the Ave r of either the A or B pooled matrix is equal to .8, error variance remains the same as when the entire AB matrix is equal to .8. Only when both factors' pooled matrices fall below an Ave r of .8 is error variance increased, thereby causing a decrease in power. Evidently, the overall Ave r of the AB matrix does not seem to be a determining factor.

Power Comparisons Across Designs With Varying K

Although there was a tendency for all three tests of the two-way RM ANOVA to show an increase in power as KB increased, especially at larger effect sizes, the reasons for the increase differed among the separate tests according to how the numerator and denominator terms of the respective F ratios were affected.
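Before turning to the individual tests, the mean squares referred to throughout this comparison can be made concrete with a short computational sketch. The function below follows the standard fully within-subjects partition of a single-group, two-factor data set (subjects x A x B); it is an illustrative sketch under that textbook partition, not the procedure used to build Table 5.1, and the array layout and function name are assumptions of the example.

```python
# Hedged sketch: mean squares of a two-way repeated measures ANOVA computed from
# one simulated data set, using the standard fully within-subjects partition.
import numpy as np

def two_way_rm_mean_squares(y):
    """y: array of shape (n subjects, a levels of A, b levels of B)."""
    n, a, b = y.shape
    grand = y.mean()
    m_s, m_a, m_b = y.mean(axis=(1, 2)), y.mean(axis=(0, 2)), y.mean(axis=(0, 1))
    m_as, m_bs, m_ab = y.mean(axis=2), y.mean(axis=1), y.mean(axis=0)
    ms = {
        "A":  n * b * ((m_a - grand) ** 2).sum() / (a - 1),
        "B":  n * a * ((m_b - grand) ** 2).sum() / (b - 1),
        "AB": n * ((m_ab - m_a[:, None] - m_b[None, :] + grand) ** 2).sum()
              / ((a - 1) * (b - 1)),
        # Error terms: the A x subjects and B x subjects interactions
        "ERR_A": b * ((m_as - m_a[None, :] - m_s[:, None] + grand) ** 2).sum()
                 / ((a - 1) * (n - 1)),
        "ERR_B": a * ((m_bs - m_b[None, :] - m_s[:, None] + grand) ** 2).sum()
                 / ((b - 1) * (n - 1)),
    }
    # A x B x subjects residual, the error term for the interaction test
    resid = (y - m_as[:, :, None] - m_bs[:, None, :] - m_ab[None, :, :]
             + m_a[None, :, None] + m_b[None, None, :] + m_s[:, None, None] - grand)
    ms["ERR_AB"] = (resid ** 2).sum() / ((a - 1) * (b - 1) * (n - 1))
    return ms

# Hypothetical usage: 10 subjects crossed with a 3 x 6 repeated measures layout
rng = np.random.default_rng(0)
ms = two_way_rm_mean_squares(rng.normal(size=(10, 3, 6)))
print(ms["A"] / ms["ERR_A"], ms["B"] / ms["ERR_B"], ms["AB"] / ms["ERR_AB"])
```

The F ratios printed at the end correspond to the MSA/MSERR-A, MSB/MSERR-B and MSAB/MSERR-AB quantities compared across correlation matrices in Table 5.1.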
Main Effect of A

For the A main effect test, the increase in power (and F) across designs agreed with the researcher's original hypothesis that power for a main effect test would increase as the number of levels of the pooled factor increased. Under the 4-4, 8-4 and 8-8 matrices, the increase seems entirely the result of an accompanying increase in mean MSA, since the mean MSERR-A of these matrices remained constant across designs (see Table 5.1). The increase in MSA itself is a direct consequence of the number of trials of factor B (i.e., KB), which in the numerator expression of eqn. 5.01 is depicted by the variable b. As evidenced by this equation, an increase in b elevates MSA, resulting in a bigger F value, and thus greater power, for the A test across designs. Interestingly, an exception exists for the condition involving a 4-8 matrix. Under this matrix, not only does the numerator increase but so does its denominator (MSERR-A), which is somewhat puzzling considering the other matrices did not demonstrate this. Returning to eqn. 5.01 again, it may seem apparent that the increase in MSERR-A is due to the presence of b in the denominator as well. However, this does not explain why A tests involving the other three matrices maintain a constant MSERR-A as b increases. The reason for this finding therefore remains unknown, but it seems evident that the effect only occurs when the Ave r of A is below the Ave r of B (4-8 matrix).

Main Effect of B

For the B main effects test, the alterations in numerator and denominator terms with increasing KB are similar to those observed in the one-way RM design under a constant r matrix pattern. That is, a corresponding decrease in mean F values with larger KB is due to a reduction in mean MSB alone, since mean MSERR-B remains constant (see Table 5.1). Under this test, the decrease in MSB across designs with higher KB is caused by a greater number of degrees of freedom (df) in the numerator term (df 3x3 = 2; df 3x6 = 5; df 3x9 = 8), which offsets any expected increase in the respective sum of squares for trials (SSB). Unlike the A test, this pattern among the mean statistics of the B test is the same for all matrices involved. Even when the Ave r of B is below that of A (8-4 matrix), we see that MSERR-B remains constant (1.39) across designs. This is because the number of repeated measures of factor A, depicted by a in eqn. 5.02, does not change across designs, as b does in eqn. 5.01 for the A test. Despite the reduction in F across designs, the tendency for power of the B test under medium and large effect sizes to increase slightly as KB increased is due to an accompanying decrease in the variability of the F ratio (SDFB). As explained earlier for the one-way RM model under conditions of sphericity, designs with larger KB have a greater concentration of F values along their critical F borders, leading to greater potential for statistical significance, and therefore higher power, when experimental conditions are sufficiently favorable.

AB Interaction

The changes in the mean statistics of the interaction test (MSAB, MSERR-AB, FAB, SDFAB) as KB increases from 3 to 9 are similar to those seen in the B main effects test. For any given matrix condition, as KB increases, MSAB, FAB and SDFAB decrease while MSERR-AB remains constant. The decline in MSAB is, again, the result of a greater number of numerator df overshadowing the expected increase in SSAB (although the decline is very slight between the 3 x 6 and 3 x 9 designs).
MSERR-AB remains constant across designs since, according to eqn. 5.03, neither variable a nor b appears in the denominator term and should therefore not affect the error variance of the AB test. The decrease in FAB as KB increases is therefore due entirely to a decrease in MSAB. The general trend for power to increase as KB increases among more favorable conditions of the AB test, as described in the results section, is unfortunately not as evident in Table 5.1, since the power associated with the conditions shown is relatively low. For these conditions, the decrease in the variability of FAB (SDFAB) across larger KB, unlike that of the B test, does not provide a sufficient power advantage in favor of designs with greater KB. Again, the reason for this can be attributed to the area of the F distribution bordering the critical point of each design. Designs with a higher KB have a greater concentration of values over a shorter F interval compared with those having fewer KB. Therefore, any condition (e.g., a change in ES) that causes a more highly concentrated distribution to shift left or right from its critical F value will bring about a greater change in the design's power than it would for a less concentrated distribution. Under less than optimal experimental conditions (low power), designs with more concentrated distributions (those with more KB) may result in equal if not less power than designs having less concentrated distributions (those with fewer repeated measures). This not only clarifies why power for the AB test was relatively constant across designs with different KB under the conditions shown in Table 5.1 (medium ES, small n), but also helps explain why the general trend for power to increase as KB increases was not universal among other conditions and tests (B main effect) exhibiting low to moderate power. The simulation results observed for the interaction test coincided only partly with the original hypothesis that power for the AB test across designs would depend on both a factor's number of RM and the magnitude of its average r. The findings of this study revealed that a factor's Ave r did not have an influential effect on the power of the AB test as KB increased.

Power Comparisons Between Tests: Main Effects and Interaction

The power differences between tests of the two-way RM ANOVA described in the results section and illustrated in Figure 4.09 can be explained by referring again to Table 5.1. Among the mean statistics shown, it is clear that power and F values for the interaction test, under all matrices and designs given, are smaller than those for the main effects tests, due entirely to a reduced numerator mean square for trials (MSAB < MSA and MSB). The smaller MSAB is attributed to a smaller sum of squares (SSAB) and a greater numerator df. Again, it should be noted that these differences between the main effect and interaction tests' statistics (particularly MS-trials) are dependent on the definitions of ES (d) used in this study. Comparing power between the main effects tests, several observations require explanation. One is why power for a 3 x 3 design under a homogeneous matrix (4-4 or 8-8) is almost always slightly greater for the B test, even though both the A and B tests have an equal number of RM and identical r matrices.7 Examination of these tests' mean statistics reveals that, under such conditions, their values are quite similar.
Slightly larger MS-trials values appear to exist for the B test, which may perhaps be responsible for the somewhat larger mean F ratios of B seen, since MSERR between the two tests remained constant. For a simulation study, however, such small differences are negligible and therefore do not provide convincing evidence for the existence of a power difference between tests. Thus, there seems to be no theoretical reason why, under these conditions, F and power should differ between the two tests. Although only speculative, perhaps the results reflect a bias in the simulation process which favors one test (in this case, B) over the other. A clear explanation for this finding remains absent.

7 The condition in Table 5.1 is a rather poor example since power between these tests under the 8-8 matrix is equal. However, this is an exception since, under most other experimental conditions, B clearly shows an advantage (refer back to the top graph in Figure 4.09 or the appropriate power tables for better examples).

In contrast to the uncertainty in the 3 x 3 design, the differences in power between main effect tests observed among the remaining designs (3 x 6 and 3 x 9) under the same conditions are more easily interpretable. Under homogeneous matrices (4-4, 8-8), the larger power and F values for the A test among these designs are clearly due to a greater MSA, which results from having a larger number of RM levels (KB, or b) in the numerator term (compare equations 5.01 and 5.02). Interestingly, the variability of the tests' MS-trials and MSERR (SDMS-trials and SDMSERR, respectively) under homogeneous matrices also seems affected by an increase in KB. However, these do not appear to be directly responsible for the differences in power seen between the two tests (with the exception of SDMSERR, a change in the variability of a particular statistic mirrors a similar change in its corresponding mean value).

Another observation requiring elaboration is the alternating power advantage that occurs between main effects tests under heterogeneous matrices (4-8 and 8-4). For all designs in which the main effects test has a smaller Ave r across its trials than the other variable present (e.g., the A test under a 4-8 matrix or the B test under an 8-4 matrix), MSERR will be larger than that of the other main effects test under the same r matrix. MS-trials is also larger, but not in every case. Therefore, in contrast to the explanation given under homogeneous matrices, differences in power between tests under these matrices are due mostly to changes in MSERR. Comparing power between the A test under a 4-8 matrix and the B test under an 8-4 structure, the higher F values and the slight power advantage the A test generates as KB increases are the result of having a larger number of RM in its analytical functions (compare b in eqn. 5.01 with a in eqn. 5.02). The same also applies when comparing values between an A test under an 8-4 matrix and a B test under a 4-8 matrix.

Noncentrality Parameter For Tests of Two-Way RM ANOVA

The results of this simulation study provide helpful information for identifying some of the analytical expressions involved in the computation of a noncentrality parameter (and thus power) for tests of the two-way RM ANOVA. For the numerator terms of the main effects and interaction, it appears the analytical function for each test is identical to those given in equations 2.01, 2.02 and 2.03 for the two-way mixed model.
The denominator term, however, appears to have its own unique expression for the different tests of the two-way RM design, as explained in the previous sections of this chapter. For main effect tests, it seems that the denominator of the noncentrality parameter involves a variable for each Ave r of the pooled matrices. The relationship between these variables and error variance is such that one variable, the Ave r of the main effects factor, acts to reduce the error variance as it increases in magnitude, whereas the pooled factor causes error variance to increase as the magnitude of its average r and/or its number of RM levels increases. These effects are comparable to those expressed in equations 2.19 (σ²(1 − ρ)) and 2.20 (σ²[1 + (q − 1)ρ]), with the former exhibiting a similarity to the trials main effect of the two-way mixed model while the latter resembles the relationship expressed by the randomized groups main effect test of the same model. How exactly these two expressions interrelate to produce a resultant effect on error variance in the two-way RM model is still uncertain, but it appears that any increase in MSERR (i.e., that part of the equation expressed by σ²[1 + (q − 1)ρ]) is not noticeable until either both factors' Ave r decrease or the Ave r of the pooled factor becomes greater than that of the main effects factor. It may be that the error variance is also affected by a third correlation variable, the Ave r of the AB coefficients, which has not been thoroughly examined as of yet.

For the interaction test, it appears that the error variance is affected by only one correlation variable, that being the highest average r among the two pooled r matrices. The denominator expression therefore is similar to eqn. 2.19. Based on this information, equations for estimating the noncentrality parameter of each test of the two-way RM may be partly derived. The noncentrality parameter formulae might resemble the following functions,

λA = nbΣ(μi − μ)² / [σ²(1 − ρA) + (b − 1)(ρB − ρAB)σ²]   for a main effects test of factor A (5.04)

λB = naΣ(μj − μ)² / [σ²(1 − ρB) + (a − 1)(ρA − ρAB)σ²]   for a main effects test of factor B (5.05)

λAB = nΣΣ(μij − μi − μj + μ)² / [σ²(1 − ρMAX)]   for an A by B interaction test (5.06)

where μij = the cell mean, μi and μj = the marginal means for the levels of the A and B factors respectively, μ = the grand mean, n = the sample size, a and b = the number of levels of the A and B factors respectively, ρA and ρB = the average of the off-diagonal correlation coefficients of the A and B pooled matrices respectively, ρAB = the average of the correlation coefficients of all AB pairs of the AB matrix, ρMAX = the highest average correlation among the two pooled matrices, and σ² = the error variance of the dependent variable involved.

The accuracy of these equations is of course unknown at this point in time. Further testing and comparison of calculated values with simulated ones would be required in order to ascertain their reliability, as well as to decipher the currently unresolved relationship(s) between the variables affecting the error variance of these different tests.
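One way such testing could proceed is sketched below. This is not part of the thesis software (the thesis computations were carried out with a FORTRAN program and DATASIM); it is a small, hedged Python/SciPy illustration that plugs hypothetical cell means into the tentative expression for λA (eqn. 5.04) and converts it to an approximate power value through the noncentral F distribution, assuming sphericity-based degrees of freedom. Values calculated this way could then be compared against simulated power for the same conditions.

# Hedged sketch (Python/SciPy rather than the thesis's FORTRAN): approximate power for the
# A main effect of an a x b within-subjects design from the tentative lambda of eqn. 5.04.
# The cell means, variance and correlations below are illustrative assumptions only.
import numpy as np
from scipy.stats import f, ncf

def power_A(cell_means, n, sigma2, rho_a, rho_b, rho_ab, alpha=0.05):
    mu = np.asarray(cell_means, dtype=float)   # shape (a, b) population cell means
    a, b = mu.shape
    grand = mu.mean()
    mu_a = mu.mean(axis=1)                     # marginal means of factor A
    lam = n * b * ((mu_a - grand) ** 2).sum() / (
        sigma2 * (1 - rho_a) + (b - 1) * (rho_b - rho_ab) * sigma2)   # eqn. 5.04
    df1, df2 = a - 1, (a - 1) * (n - 1)        # uncorrected (sphericity-assuming) df
    fcrit = f.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(fcrit, df1, df2, lam)   # power from the noncentral F

means = [[10.0, 10.0, 10.0],                   # hypothetical 2 x 3 cell means
         [10.5, 10.5, 10.5]]
print(round(power_A(means, n=15, sigma2=1.0, rho_a=0.8, rho_b=0.8, rho_ab=0.8), 3))

The same routine, with the denominators of eqns. 5.05 and 5.06 substituted, would give the corresponding approximations for the B and AB tests; agreement or disagreement with the Monte Carlo estimates would indicate how far these tentative expressions can be trusted.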
Chapter Six

Summary and Conclusions

The primary objective of this study was to provide researchers in the field of Human Kinetics with a more practical means of determining univariate power for one- and two-way repeated measures and two-way mixed ANOVA designs. This was accomplished by first generating power values, using analytical and Monte Carlo simulation methods, for varying conditions of sample size, effect size, and magnitudes and patterns of correlation, and then making these estimates available in the form of power tables. A secondary purpose of this investigation was to examine and interpret those power trends that are less well known among designs failing to meet the assumption of sphericity or involving two RM factors. From the results of this study, the following conclusions were drawn.

In general:

When other conditions were held constant, an increase in either sample size, effect size, alpha or the average correlation (Ave r) across repeated trials resulted in a concomitant increase in power for most tests and designs of this study. An exception was power for the group main effect test of the two-way mixed ANOVA, which decreased as the Ave r among repeated measures increased. The effects of these statistical parameters on power had been predicted and were based on changes in the numerator and/or denominator terms of F and λ.

With respect to power under conditions of nonsphericity:

When power for a one-way RM design under a constant r matrix pattern (C) was low (<.20), power for the same test under a trend r matrix pattern (T) was found to be greater by up to .08. In contrast, when power was moderate to high (.50-.90) under C, power for the same test under T was found to be lower by up to .18. In addition, the degree to which power was greater or lesser under T increased as the number of repeated measures (RM) in a design increased. Depending on the effect size involved, a design displayed about equal power under C and T when values were between .20 and .40. The discrepancy in power under conditions of T was attributed to an increase in the variability of the F ratio or, more specifically, of MSK and MSERR.

Regarding power as the number of RM (K) increased across designs:

1. Under conditions of sphericity, there was a common pattern for power to increase as K increased among all tests examined. However, the increase was mostly observable at medium to large effect sizes and among one-way RM and main effects tests of two-way designs. The largest differences in power occurred between designs with 3 and 9 RM, ranging from .15 to .58. In most cases, the power increase resulted from either a reduction in the variability of F or an increase in MSK as K became larger. It was noted that the extent of this power trend was dependent on the definition of effect size (d) used in this study.

2. Under conditions of nonsphericity, the tendency for power to increase as K increased was mainly observable at small effect sizes or when alpha was low (.01). At larger effect sizes and alpha, the trend tended to reverse itself, with power becoming equal to or greater for designs with fewer K. Reasons for this were attributed to the lower concentration of F values occurring at the level of significance for designs with fewer K, which at small effect sizes (low power) served as a disadvantage but at large effect sizes (high power) provided a power advantage over designs with more RM.

Regarding power differences between tests of a two-way design:
1. For most experimental conditions of the two-way mixed model, the trials main effect test showed the greatest power of the three tests involved, followed by the groups main effect test, which exhibited more power than the interaction test but only when the Ave r among repeated trials was low (.4). The reduction in power for the groups test as Ave r increased was, as expected, caused by an increase in the within-group variability, or error variance, associated with that effect.

2. For main effect tests of the two-way RM model, under homogeneous r matrices the test with the higher number of "pooled" RM (i.e., the A test) exhibited greater power than the other main effect test (i.e., B), except when the numbers of RM of both factors were equal, in which case the B test, for reasons unknown, was slightly favored. Under heterogeneous r matrices, power was greatest for the main effect test having the highest Ave r among its trials. Differences in MSK were found to be responsible for the power order observed among tests under homogeneous r structures, while changes in MSERR and, to a lesser extent, MSK were accountable for those seen under heterogeneous matrices.

For the interaction test, power was found to be the least among the three tests for almost all conditions examined. However, the lowered power observed was recognized as being largely dependent on the way in which the interaction test's effect size, d, was determined in this study.

Regarding power among tests with different correlation matrices in the two-way RM ANOVA:

1. For main effect tests, power was found to be greatest when the Ave r among trials of the main effect factor was high (regardless of whether the Ave r of the pooled factor was equal to or below it) and lowest when its Ave r fell below that of the pooled factor. The changes in MSERR responsible for these differences in power across matrices seemed to depend on the magnitude of both factors' Ave r and the number of RM involved.

2. For the interaction test, power was found to be greatest among those matrices in which at least one factor had an Ave r across its trials equal to .80. In addition, the Ave r of the overall (AB) matrix was found to be a less influential variable in affecting the power of the interaction test than the Ave r of either pooled (A or B) matrix. The changes in MSERR observed appeared to be entirely dependent on the highest Ave r among the pooled matrices.

In addition to providing a detailed interpretation and discussion of the results obtained, the findings from the two-way RM ANOVA simulations were used to construct preliminary analytical expressions of λ for each test of the design. From the results of this study, and in consideration of future work in this area, it is recommended that:

> the effects of nonspherical r matrices on power be examined for the two-way mixed and RM models as well, in order to determine whether the findings observed under the one-way RM model are similar across the different RM tests of these designs.

> power under many more r structures and levels of RM of the two-way RM ANOVA model be determined, in order to identify the exact relationship between these variables and the error variance of a particular test, so that the specific functions of the noncentrality parameters involved can be derived and validated.
> the information derived from this and follow-up studies be used to create a user-friendly computer program capable of providing power estimates for designs with more than one RM factor and heterogeneous r structures.

References

1. Austin, H. W. (1983). Sample Size: How Much is Enough. Quality and Quantity, 17, 239-245.
2. Barcikowski, R. S. (1973). A Computer Program For Calculating the Power When Using the T2 Statistic With Selected Designs. Educational and Psychological Measurement, 33, 723-726.
3. Barcikowski, R. S., & Holthouse, N. (1972). A Computer Program For Calculating the Power of F Tests in Analysis of Variance and Covariance for Specified Alpha Levels, Sample Sizes, and Effect Sizes. Educational and Psychological Measurement, 22, 169-172.
4. Betz, M. A., & Thompson, B. L. A Comparison of New Power Approximations in Repeated Measures Analyses. Unpublished manuscript, Arizona State University.
5. Bloch, D. A. (1986). Sample Size Requirements and the Cost of a Randomized Clinical Trial With Repeated Measurements. Statistics in Medicine, 5, 663-667.
6. Borenstein, M., & Cohen, J. (1988). Statistical Power Analysis: A Computer Program. Hillsdale, NJ: Lawrence Erlbaum Associates.
7. Borich, G. D., & Godbout, R. C. (1974). Extreme Groups Designs and the Calculation of Statistical Power. Educational and Psychological Measurement, 34, 663-675.
8. Box, G. E. P. (1954). Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, II. Effects of Inequality of Variance and of Correlation Between Errors in the Two-Way Classification. Annals of Mathematical Statistics, 25, 484-498. Cited in Green, 1992.
9. Bradley, D. B. (1989). DATASIM. Lewiston, Maine: Desktop Press.
10. Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. 3rd Ed. Hillsdale, NJ: Lawrence Erlbaum Associates. First and second editions, 1969 and 1977, respectively.
11. Collier, R. O., Jr., Baker, F. B., Mandeville, G. K., & Hayes, T. F. (1967). Estimates of Test Size For Several Test Procedures on Conventional Variance Ratios in the Repeated Measures Design. Psychometrika, 32, 339-353. Cited in Green, 1992.
12. Davidson, M. L. (1972). Univariate versus Multivariate Tests in Repeated-Measures Experiments. Psychological Bulletin, 77(6), 446-452.
13. Dodd, D. H., & Schultz, R. F., Jr. (1973). Computational Procedures For Estimating Magnitude of Effect For Some Analysis of Variance Designs. Psychological Bulletin, 79(6), 391-395.
14. Edgington, E. S. (1974). A New Tabulation of Statistical Procedures in APA Journals. American Psychologist, 29, 25-26. Cited in Robey and Barcikowski, 1984.
15. Eom, H. J. (1993). The Interaction Effects of Data Categorization and Noncircularity of the Sampling Distribution of Generalizability Coefficients in Analysis of Variance Models: An Empirical Investigation. Unpublished doctoral dissertation, University of British Columbia.
16. Green, S., & Barcikowski, R. S. (1992). Power Analysis and Sphericity in Repeated Measures Analysis of Variance with Heterogeneously Correlated Occasions. Unpublished doctoral dissertation, Ohio University.
17. Greenhouse, S. W., & Geisser, S. (1959). On Methods in the Analysis of Profile Data. Psychometrika, 24, 95-112. Cited in Mulvenon, 1993.
18. Grima, A. M., & Weinberg, S. (1987). An Analysis of Repeated Measures Data: An Exploration of Alternatives (MANOVA). Unpublished doctoral dissertation, New York University.
19. Howell, D. C. (1992). Statistical Methods For Psychology. 3rd Ed. Belmont, CA: Duxbury Press.
20. Huynh, H., & Feldt, L. S. (1970). Conditions Under Which Mean Square Ratios in Repeated Measurement Designs Have Exact F-Distributions. Journal of the American Statistical Association, 65, 1582-1589.
21. Huynh, H., & Feldt, L. S. (1976). Estimation of the Box Correction For Degrees of Freedom From Sample Data in Randomized Block and Split-Plot Designs. Journal of Educational Statistics, 1, 69-82.
22. Koele, P. (1982). Calculating Power in Analysis of Variance. Psychological Bulletin, 92(2), 513-516.
23. Kraemer, H. C., & Thiemann, S. (1987). How Many Subjects? Beverly Hills, CA: Sage.
24. Kraemer, H. C., & Thiemann, S. (1989). A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials. Journal of Consulting and Clinical Psychology, 57(1), 148-154.
25. Lipsey, M. W. (1990). Design Sensitivity. Newbury Park: Sage Publications.
26. Lui, K., & Cumberland, W. G. (1992). Sample Size Requirement for Repeated Measurements in Continuous Data. Statistics in Medicine, 11, 633-641.
27. Marcucci, M. (1986). A Comparison of the Power of Some Tests for Repeated Measurements. Journal of Statistical Computation and Simulation, 26, 37-53.
28. Mendoza, J. L., Toothaker, L. E., & Nicewander, W. A. (1974). A Monte Carlo Comparison of the Univariate and Multivariate Methods for the Groups by Trials Repeated-Measures Design. Multivariate Behavioral Research, 9, 165-177.
29. Muller, K. E., & Barton, C. N. (1989). Approximate Power for Repeated-Measures ANOVA Lacking Sphericity. Journal of the American Statistical Association, 84(406), 549-555.
30. Muller, K. E., & Barton, C. N. (1991). Correction to "Approximate Power for Repeated-Measures ANOVA Lacking Sphericity". Journal of the American Statistical Association, 86, 255-256.
31. Muller, K. E., LaVange, L. M., Ramey, S. L., & Ramey, C. (1992). Power Calculations for General Linear Multivariate Models Including Repeated Measures Applications. Journal of the American Statistical Association, 87, 1209-1224.
32. Muller, K. E., & Peterson, B. L. (1984). Practical Methods For Computing Power in Testing the Multivariate General Linear Hypothesis. Computational Statistics and Data Analysis, 2, 143-158.
33. Mulvenon, S. W., & Betz, M. A. (1993). Analytic Formulae For Power Analysis in Repeated Measures Designs. Unpublished doctoral dissertation, Arizona State University.
34. Olejnik, S. F. (1984). Planning Educational Research: Determining the Necessary Sample Size. Journal of Experimental Education, 53(1), 40-48.
35. PASS (1991). PASS (Power Analysis and Sample Size) Version 1.0 [Computer Program]. Kaysville, UT: NCSS; Jerry L. Hintze.
36. Pearson, E. S., & Hartley, H. O. (1951). Charts of the Power Function of the Analysis of Variance Tests, Derived From the Noncentral F-Distribution. Biometrika, 38, 112-130. Cited in Rotton and Schonemann, 1978.
37. Potvin, P. J., & Schutz, R. W. (1995). Predicting Power Trends in Repeated Measures ANOVA: A Preliminary Investigation Using a Random Number Generator Program. Unpublished term paper.
38. Robey, R. R., & Barcikowski, R. S. (1984). Calculating the Statistical Power of the Univariate and the Multivariate Repeated Measures Analyses of Variance For the Single Group Case Under Various Conditions. Educational and Psychological Measurement, 44(1), 137-143.
39. Rochon, J. (1991). Sample Size Calculations for Two-Group Repeated-Measures Experiments. Biometrics, 47, 1383-1398.
40. Rotton, J., & Schonemann, P. H. (1978). Power Tables For Analysis of Variance. Educational and Psychological Measurement, 38, 213-229.
41. Rouanet, H., & Lepine, D. (1970). Comparison Between Treatments in a Repeated-Measurement Design: ANOVA and Multivariate Methods. British Journal of Mathematical and Statistical Psychology, 23, 147-163. Cited in Green, 1992.
42. Schutz, R. W., & Gessaroli, M. E. (1987). The Analysis of Repeated Measures Designs Involving Multiple Dependent Variables. Research Quarterly For Exercise and Sport, 58(2), 132-149.
43. SOLO (1992). SOLO Power Analysis Version 1.0 [Computer Program]. Los Angeles, CA: BMDP Statistical Software, Inc.
44. St. Pierre, R. G. (1980). Planning Longitudinal Field Studies: Considerations in Determining Sample Size. Evaluation Review, 4(3), 405-415.
45. Sutcliffe, J. P. (1980). On the Relationship of Reliability to Statistical Power. Psychological Bulletin, 88(2), 509-515.
46. Tang, P. C. (1938). The Power Function of the Analysis of Variance Tests With Tables and Illustrations of Their Use. Statistical Research Memoirs, 2, 126-149. Cited in Davidson, 1972.
47. Tiku, M. L. (1967). Tables of the Power of the F-Test. Journal of the American Statistical Association, 62, 525-539. Abstract only.
48. Vonesh, E. F., & Schork, M. A. (1986). Sample Sizes in the Multivariate Analysis of Repeated Measurements. Biometrics, 42, 601-610.
49. Winer, B. J., Brown, D. R., & Michels, K. M. (1991). Statistical Principles in Experimental Design. 3rd Ed. New York: McGraw-Hill, Inc.

Studies Used To Provide Empirical Data

1. Bozac, A. (1990). Determining Exogenous Glucose Oxidation During Moderate Exercise. Thesis dissertation, University of British Columbia.
2. Cress, M. E., Thomas, D. P., Conrad, J. J., Kasch, F. W., Cassens, R. G., Smith, E. L., & Agre, J. C. (1991). Effect of Training on VO2max, Thigh Strength, and Muscle Morphology in Septuagenarian Women. Medicine and Science in Sports and Exercise, 23(6), 752-758.
3. Davidson (1989). The Effects of a Six Week Sea Level Exposure on the Cardiac Output of High Altitude Quechua Natives. Thesis dissertation, University of British Columbia.
4. Gitto, A. (1996). Relationship of Excess Post-Exercise Oxygen Consumption to VO2max and Recovery Rate. Thesis dissertation, University of British Columbia.
5. Ienna, T. (1994). The Asthmatic Athlete: Metabolic and Ventilatory Responses to Exercise Without Pre-Exercise Medication. Thesis dissertation, University of British Columbia.
6. Lasko-McCarthey, P., & Davis, J. A. (1991). Effect of Work Rate Increment on Peak Oxygen Uptake During Wheelchair Ergometry in Men With Quadriplegia. European Journal of Applied Physiology and Occupational Physiology, 62(5), 349-353.
7. Lebrun, C. (1992). The Effects of the Menstrual Cycle and Oral Contraceptives on Athletic Performance. Thesis dissertation, University of British Columbia.
8. Mack, R. (1995). The Efficacy of Topical Ibuprofen in an Inflammatory Model: Delayed Onset Muscle Soreness. Thesis dissertation, University of British Columbia.
9. Rishiraj, N. (1996). The Role of Functional Knee Bracing in a Dynamic Setting. Thesis dissertation, University of British Columbia.
10. Sheel, W. (1995). The Time Course of Pulmonary Diffusion Capacity Changes Following Maximal Exercise. Thesis dissertation, University of British Columbia.
11. Sheel, W. A., Lama, I., Potvin, P., Coutts, K. D., & McKenzie, D. C. (1996). Comparison of Aero-bars Versus Traditional Cycling Postures on Physiological Parameters During Submaximal Cycling. Canadian Journal of Applied Physiology, 21(1), 16-22.
12. Walton, P. (1996). Effects of Pre-Exercise Solid and Liquid Carbohydrate Feedings on High-Intensity Intermittent Exercise Performance. Thesis dissertation, University of British Columbia.

Appendix

[The appendix tables printed on pages 133-137 of the original are not legible in this copy and are not reproduced here.]
• ' i II < 111 111 O I-O o II g h I > > °- s > Potvin '96 137 References Appendix 3.0 FORTRAN Program For Calculating Noncentrality Parameter and Effect Sizes (d & f) C By Patrick TOWER-SEEKER' Potvin C Developed Fall "95. C*******Limitations of Program ************************************ C* * C* - Cohen's d calculations for interaction effect * C* for designs with 2 or 3 groups are performed using * C* formulae created by this author. Cohen's d values given* C* by this program for designs with more than 3 groups are * C* not correct since they are only based on 3 groups. * C* * C* - Calculations assume equal n per group (program does not * C* calculate weighted means or variances) * C* * C* - number of RM levels should not exceed 9!! * C* * c* * c* * C******** Preparing to Execute this program *********************** C In 'datafile', do the following: C30 - Line 1: give a title to youriprogram run (< 60 characters). C - Line 2: enter the # of levels for each factor as RG x RM. C col's 1-2 for RG, col's 4-5 for RM. C - Line 3: enter Uie format your means and SD's are arranged. C for example, (4f7.2) - make sure you include the C parentheses! C - Line 4: enter the pattern of the distribution of your means. C either as Tsven', 'Centered' or 'At ends'. C Start in column 1. Do not exceed 8 characters. C - Line 5 & +: C enter the cell means per group (RM means). C format should be as '17.2' and # of means not over 9. C one line of means per group. Start in col 1. C - Lines thereafter: C enter the Cell SD's per group, allowing only 1 group C of SD's per line. Format same as for means. C C In the "NCP.f program (this one), do the following: C - Optional: you can change the R and n values as you wish by C changing the default settings (set at 0). C To compile the program, type: C C 177 ncp.f-o luck (compiled program saved under 'Luck'). Potvin '96 138 C To execute program, type: C Luck at the unixg command prompt. C To view results, type: C vi output C*****DEFINrriONS***** C P number of levels for randomized group factor C60 Q number of levels for repeated measures factor C n sample size per group C U population cell mean C SD population cell standard deviation CIO R average correlation across RM trials C GPncp noncentrality parameter for group effect C RMncp noncentrality parameter for RM effect C INTncp noncentrality parameter for Interaction effect C GPf cohen's f for group effect C RMf cohen's f for RM effect C INTf cohen's f for interaction effect C GPd, RMd, INTd cohen's d per effect C GPdf, RMdf, INTdf degrees of freedom for each effect C Errdfl, Errdf2 degrees of freedom for between group C & within group error terms, respectively C dataflle datafile where program retrieves data C output fde where program outputs results C GSUM, RSUM, CSUM sum of means for grand, rows and columns C GMEAN, RMEAN, CMEAN grand mean, row and column marginal means C VARSUM, VARMEAN sum and grand mean of variances (SD's) C GPMAX, RMMAX, DIFFMAX Maximum group and RM marginal means C Maximum difference of differences C GPMIN, RMMIN, DIFFMIN Minimum group and RM marginal means C Minimum difference of differences C DIFF??(Q) Difference between group cell means C at each level of Q C PATT Pattern or distribution of cell means C Either: *Even', 'At Ends' or 'Centered' C90 TITLE Title you give to each program run. 
C*****VARIABLE DECLARATIONS***** INTEGER P, Q, n, A, B REAL*8 U(5,20), SD(5,20), R, + GSUM, RSUM(5),CSUM(20), VARSUM, + GMEAN, RMEAN(5),CMEAN(20), VARMEAN, + GPSUM2,RMSUM2,INTSUM2, + GPd,GPf,GPncp, RMd,RMf.RMncp, + INTdJNTfJNTncp, Potvin '96 + + + + GPMAX, GPMTN, R M M A X , RMMIN, DIFFAB(20), DIFFBC(20), DJTFAC(20), INTAB, INTBC, INTAC, ABMAX,ABMIN,BCMAX,BCMIN,ACMAX,ACMIN, INTMAX, GPdf, Errdfl, RMdf, Errdf2, INTdf CHARACTER* 15 PATT CHARACTER*70 FMT1, TITLE C49 C WRITE (FMT1.12) '(',Q,'(F7.2))' C12 FORMAT (AU2.A17) C***READ mean's and sd's factor levels and design pattern FROM DATAFTLE* OPEN (UNIT=1, FILE='2x3.daf, STATUS='OLD', + ACCESS='SEQUENTIAL', FORM=PORMATTED') READ (1,6) TITLE READ(.1,7)P,Q READ (1,12) FMT1 READ (1,8) PATT READ (1.FMT1) ((U(IJ), J=1,Q), 1=1^ ) READ (lfMTl) ((SD(I,J), J=1,Q), I=1,P) 6 FORMAT (A60) 7 FORMAT (I2.1X.I2) 8 FORMAT (A8) 12 FORMAT (A60) CLOSE(l) C*****Start Calculations for each r value and sample size***** OPEN (UNIT=2, FILE='dataflle,, STATUS='NEW', FORM="FORMATTED') C ***SetRtozero*** WRITE (2,14) '**** ', TITLE,' ****' WRITE (2,16)" 14 FORMAT (A5 A60A5) 16 FORMAT (A60) DOS A=l,5 C ***Change R values on each loop to .20, .40, .60, .80,1.00*** R = R + 0.20 C124 R= 0.00 C ***Write Tide of Execution e.g. Effect Size*** Potvin '96 IF (R .GE. 1.0) THEN R = 0.99 ENDIF C ***Write heading for correlation to output file' WRITE (2,9) "Average Correlation Used = ", R WRITE (2,9) *' 9 FORMAT (A30,f4.2) C ***Set sample size to 0*** n= 0 DO 10B=1,6 C ***Change sample size values to 5,10,15,20,25,30 with each loop* n = n + 5 C ***Write heading for sample size to output file*** WRITE (2,13)'Sample Size Used = \n WRITE (2,13) *' 13 FORMAT (9XA21.I2) C189 Calculate Grand Mean and Marginal Means C First set all means & sums to zero! GSUM = 0 GMEAN = 0 C Then convert P, Q & n integers to real numbers! RP = P RQ = Q Rn = n C Print *, RP, RQ, Rn DO 151=1 J> RMEAN (I) = 0.0 RSUM(I) = 0.0 15 CONTINUE DO20J=l,Q CMEAN (J) = 0 CSUM(J) = 0 20 CONTINUE Potvin '96 References C Then do calculations DO 251=1 J> DO30J=l,Q RSUM (I) = RSUM (I) + U(I,J) CSUM (J) = CSUM (J) + U(I,J) GSUM = GSUM + Ua,D 30 CONTINUE 25 CONTINUE C221 GMEAN = GSUM/(RP*RQ) C PRINT *, GSUM, GMEAN DO 321=1^ RMEAN(I) = RSUM(T)/RQ 32 CONTINUE DO 33 J=1,Q CMEAN(J) = CSUM(J)/RP 33 CONTINUE C DO 17 I=1J> C DO 18 J=1,Q C PRINT *, CMEAN(J), CSUM(J) C PRINT *, RMEAN(I), RSUM (I) C 18 CONTINUE C 17 CONTINUE C Calculate Mean Variance from Standard Deviation in datafile**** C243 First set variance mean & sum to zero! 
VARMEAN = 0 VARSUM = 0 C Then do calculations DO 35 I=1J> DO40J=l,Q VARSUM = VARSUM + (SD(I,J)**2) 40 CONTINUE 35 CONTINUE VARMEAN = VARSUM/(RP*RQ) C*****Write Variance sum to screen - error checking***** C PRINT *, VARSUM C***** Calculate the MEan deviations for Group, RM & Inter, effects GPSUM2 = 0 RMSUM2 = 0 INTSUM2 = 0 Potvin '96 142 45 DO 45 1=1,P GPSUM2 = GPSUM2 + (RMEAN(I) - GMEAN)**2 CONTINUE DO50J=l,Q RMSUM2 = RMSUM2 + (CMEAN(J) - GMEAN)* *2 50 CONTINUE DO 55 I=1P DO60J=l,Q INTSUM2 = INTSUM2 + (U(I,J) - RMEAN(I)-CMEAN(J>GMEAN)**2 60 CONTINUE 55 CONTINUE C Print *, GPSUM2, RMSUM2, INTSUM2 C286 C******Calculate MS error for each design effect******* GPerr = VARMEAN*(1.0 + ((RQ-1.0)*R)) RMerr = VARMEAN*(1.0-R) C Print *, VARMEAN, GPerr, RMerr Print 77, R, n 77 FORMAT (2X, f4.2,2X, 12) C******Calculate NCP for each effect******************* GPncp = Rn*RQ*GPSUM2/GPerr RMncp = Rn*RP*RMSUM2/RMerr INTncp- Rn*INTSUM2/RMerr C Print *, GPncp, RMncp, INTncp C Print *, INTSUM2 C******Calculate Cohen's f for each effect***** GPf = SQRT(GPSUM2/(RP*GPerr)) RMf = SQRT(RMSUM2/(RQ*RMerr)) INTf = SQRT(INTSUM2/((((RP-l)*(RQ-l))+l)*RMerr)) C Print*, GPf, RMf, INTf C******Calculate Cohen's d for each effect***** C First find largest and smallest means per effect GPMAX = -999999.0 DO 651=1 J> IF (RMEAN(I) -GT. GPMAX) THEN GPMAX = RMEAN(I) ENDIF Potvin '96 References 65 CONTINUE C325 GPMIN = 999999.0 DO 701=1 IF (RMEAN(I) .LT. GPMIN) THEN GPMIN = RMEAN(I) ENDIF 70 CONTINUE RMMAX = -999999.0 DO 75 J=1,Q IF (CMEANCD .GT. RMMAX) THEN RMMAX = CMEAN(J) ENDIF 75 CONTINUE RMMIN = 999999.0 DO80J=l,Q IF (CMEAN(J) .LT. RMMIN) THEN RMMIN = CMEAN(J) ENDIF 80 CONTINUE DO 85 J=1,Q IF (P .EQ. 3) THEN DIFFAB(J) = U(1J) DIFFBC(J) = U(2 J) DIFFAC(J) = U(1,J) ELSE IF (P L.T. 3) THEN DIFFAB(J) = U(U) C DIFFBC(J)= 0 C DIFFAC(J)= 0 ENDIF 85 CONTINUE ABMAX=-999999.0 DO90J=l,Q IF (DIFFAB(J) .GT. ABMAX) THEN ABMAX = DIFFAB(J) ENDIF 90 CONTINUE ABMIN = 999999.0 D095 J=1,Q IF (DIFFAB(J) .LT. ABMIN) THEN ABMIN = DIFFAB(J) ENDIF 95 CONTINUE INTAB = ABS(ABMAX - ABMIN) BCMAX = -999999.0 Potvin '96 144 -U(2,J) -U(3,J) -U(3J) -U(2J) DO 91 J=1,Q IF (DIFFBG(J) .GT. BCMAX) THEN BCMAX = DIFFBC(J) ENDIF 91 CONTINUE BCMIN = 999999.0 D0 96 J=1,Q IF (DIFFBC(J) .LT. BCMIN) THEN BCMIN = DIFFBC(J) ENDIF 96 CONTINUE INTBC = ABS(BCMAX - BCMIN) ACMAX = -999999.0 D0 92J=1,Q IF (DIFFAC(J) .GT. ACMAX) THEN ACMAX = DJTFAC(J) ENDBF 92 CONTINUE ACMIN = 999999.0 DO 97 J=1,Q IF (DIFFAC(J) .LT. ACMIN) THEN ACMIN = DIFFAC(J) ENDIF 97 CONTINUE INTAC = ABS (ACMAX - ACMIN) INTMAX = MAX(INTAB ,INTBC,INTAC) C Print*, GPMAX, GPMN C Print *, RMMAX, RMMIN Print *, ABMAXABMIN.INTAB Print *, BCMAX,BCMIN,INTBC Print *, ACMAX.ACMINJNTAC Print *, INTMAX C370 Then calculate Cohen's d! GPd = (GPMAX - GPMIN)/SQRT(VARMEAN) RMD = (RMMAX - RMMIN)/SQRT(VARMEAN) INTO = INTMAX/SQRT(VARMEAN) C Print *, GPd, RMd, INTd C******Calculate degrees of freedom for each effect****** GPdf =RP-1.0 Errdfl =RP*(Rn-1.0) RMdf = RQ -1.0 Errdf2 =RP*(RQ-1.0)*(Rn-1.0) INTdf =(RP-1.0)*(RQ-1.0) Potvin '96 C******WRITE Cohen's d,f & NCP values and df s to output file******** WRITE (2,11) T>ESIGN\PATTERN', EFFECT.'DF, 'd', 'f, 'NCP 11 FORMAT (2X A6.2X.A7.3X A6,1X,A5,3(2XA11)) IF (P £Q. 1) THEN WRITE (2,21) P,' x', Q, PATT, "RM'JlMdf JiMdJlMf JIMncp WRITE (2,31) •RMerr', Errdf2,'--','-','-' ELSE IF (Q .EQ. 
1) THEN WRITE (2,21) P," x', Q, PATT, 'GROUP',GPdf,GPd,GPf,GPncp WRITE (2,31) 'GPerr', Errdf 1,'--','-', ELSE WRITE (2,21) P,' x', Q, PATT, 'GROUP'.GPdf.GPd.GPf.GPncp WRITE (2,21) P," x', Q, PATT, 'RM',RMdf,RMd,RMf,RMncp WRITE (2,21) P,' x ', Q, PATT, 'INT'JNTdf,INTd,INTf JNTncp WRITE (2,31) 'GPerr', Errdfl,'--','-','-' WRITE (2,31) 'RMerr', E r r d f 2 , ' - - ' ENDIF 21 FORMAT (2XJl,A3,Il,lX,2X,A8,2X,A6,lX,f5.1,3(2X,fl 1.3)) 31 FORMAT (20X.A6,1X/5.1,3(8X, A2.3X)) WRITE (2,41)'. WRITE (2,41)'' 41 FORMAT (A70) C417 10 CONTINUE WRITE (2,41)' WRITE (2,41)'' 5 CONTINUE C******* Print to screen and write to output file the cell, marginal & C grand means and the population variance. PRINT *, 'Cell Means Row Marginal Means' DO 17 1=1 J> PRINT 18, (U(I,J), J=1,Q), RMEAN (I) 17 CONTINUE PRINT 18, (CMEAN (J), J=1,Q), GMEAN 18 FORMAT (10(f7.2)) PRINT *, 'Column Marginal Means Grand Mean' PRINT *, 'Variance Mean =' , VARMEAN Potvin '96 References WRITE (2,19) Population Means' DO 241=1 J» WRITE (2,22) (U(I,J), J=1,Q), RMEAN (I) 24 CONTINUE WRITE (2,19) ' ' WRITE (2,22) (CMEAN (J), J=1,Q), GMEAN WRITE (2,19)' ' WRITE (226) Population Variance =', VARMEAN 19 FORMAT (A20) 22 FORMAT (10(F7.2)) 26 FORMAT (A22.F7.2) CLOSE(2) END Potvin '96 147 i Appendix c CD "cS CO CM 9> .N CO o o o o o o O If) o T - CM CJ) CD CD o o o o o o o o o CM LO 00 cn cri o i o o o o o o o i n o CM T * cri cri cri o o o o o o o o o o o o cri cri cri = .="CD CO, 2_ j _ CM LO 00 I CO LO CO CM CD II 9> .N CO I o o 0 o LO O CM T t 01 cri o o o o o o LO CO cn cn cn o o o o o o co O T t T - T}- cq cri cn cn o o o o o o CM O CO T-; CO T J cn cn cn o o o o o o 00 o CM o CM CO cri cri cri o o o o o o T t o CO o i— 1— cri cri o i o o o o o o o o o o o o cri o i o i = .=• aT 2 TJ CT E CD CO CM LO 00 c 1 cn co co io co CM CO II CD .N CO T K J 1 Ui o o o o o o 0 LO o T— CM T t 01 o i o i o o o o o o LO 00 cn cn cn o i o i o r -o o £ 9 ° cn cn cn o o o 0 LO o m s o T - ; CO CO 01 cj) cn 0 LO o LO c\i o CM T— O T - co i q 01 o i cn o o o o o o O LO O •r- CM T t cn cn cn o LO o i n s o N c o o p T - CO cn cn cn o o o o LO o LO CM o o T — CM o i o i o i o LO o LO CM o CM CO o O O o i o i o i o o o o o o o o o o o o cri o i o i = .2 oT 55 TJ p CM LO CO CD N T5 CD 3= CD TJ C co c g> 'co CD TJ c CD > D) CO "5 _co Q5 o 75 h. .o 8 > CD O CO T3 C B co Potvin '96 148 Appendix CM • X c ® . Q. < CO c o> CO CD o < > O CO I o 5 o c CO a> £ co c "5> >_ ca 2 TJ C CO a> O O J 0) II 5 co E CO CD N co o UJ 75 . c i 1 o o q o o o o CM o o cri cri cri cri CO o LO o o o CM o LO CO o o CM C> cri cri cri LO o CO o o CO v— o 8) o CO cri cri cri cri o o o CM o CO CM o CM —^ cri cri cri cri CO o o CO o o o CO o CO cri cri cri CM o cn o o o o o CO cri cri cri o i n cn o o o o LO o o o o CO cri cri cri "ca c CM co Margi ca tt3 CO LO in m II ¥ T3 CD CD N CO Z £ UJ w — Si 0) CO m CM cn cn CM cn C O o L O CM cri o o L O cri o o cri o o co cri .5 ? ca co L O o o CM cri oo a> • > | | » C O "O CO o o cn CD S> CM CO o o o cri .5 I CD N co UJ cn o o cri o o CO cri o 3 cri o s? cri o o • t cri o o co cri o s cri o " $ cri 8 CO cri cn o C O cn o o o cri I CM CO .8 l i CO X CO tn < tn Si 3 tn SI 3 .8 CO OJ • II = 31 | « > C M (0 m CD N CO Z UJ cn CM CO o o cn ^ in II CO o o CM cri CO o o cn TJ CD SI a> CM CO o o o cri CO •I I CD N CO *-> U UJ cn cn CO X CO < CM tn < CO o L O CM cri ca o o L O cri o L O CM cri o o o cri CO • II 5. 
[The appendix tables preceding Appendix 5.0 are not legible in this copy and are not reproduced here.]

Appendix 5.0 Function Used To Compute Effect Size (d) For Tests of Interaction in Two-Way (A x B) ANOVA Designs

2 x K designs: For designs having 2 levels on one factor (A) and 3 or more on the other (B), effect size, d, was calculated using

d = | (μ1 − μ2)MAX − (μ1 − μ2)MIN | / σ

where (μ1 − μ2)MAX represents the maximum of the differences between the A1 and A2 cell means among the B levels and (μ1 − μ2)MIN is the minimum of the differences between the A1 and A2 cell means among the B levels.
The numerator entails taking the absolute difference between the maximum and minimum values and dividing by σ, the average within-cell standard deviation.

3 x K designs: For designs having 3 levels on one factor (A) and 3 or more on the other (B), d was calculated as above, but since three values for the numerator term of this function are possible (one for the differences between A1 and A2 cell means, another for the differences between A1 and A3, and yet another for those between A2 and A3), only the one yielding the largest number (largest absolute difference) was used to compute effect size.

[The appendix tables that follow (from page 153 of the original) are not legible in this copy and are not reproduced here.]
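As a concrete illustration of the Appendix 5.0 rule, the short Python sketch below (not part of the original thesis programs; the cell means and standard deviation are hypothetical) computes d for an A x B table of population means by taking, for each pair of A levels, the range of their cell-mean differences across the B levels and keeping the largest such range.

# Hedged sketch of the Appendix 5.0 interaction effect size rule (hypothetical numbers).
from itertools import combinations
import numpy as np

def interaction_d(cell_means, sigma):
    # cell_means: array-like of shape (a, b); sigma: average within-cell standard deviation
    mu = np.asarray(cell_means, dtype=float)
    ranges = []
    for i, j in combinations(range(mu.shape[0]), 2):   # every pair of A levels
        diff = mu[i] - mu[j]                           # differences across the B levels
        ranges.append(diff.max() - diff.min())         # range (max minus min) of those differences
    return max(ranges) / sigma                         # largest range, scaled by sigma

means = [[10.0, 10.0, 10.0],                           # hypothetical 3 x 3 cell means
         [10.0, 10.5, 11.0],
         [10.0, 10.2, 10.4]]
print(round(interaction_d(means, sigma=2.0), 2))       # prints 0.5 for these numbers

For a 2 x K table the loop reduces to the single A1 versus A2 comparison, so the same function covers both cases described above.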
[Appendix tables printed on pages 153-164 of the thesis are not recoverable from the OCR text of this scan.]

Appendix 7.1
Effect Size and Noncentrality Parameter Values For One-Way RM ANOVA Designs (constant r matrix pattern only)

Effect    Ave r   n      K = 3             K = 6             K = 9
Size (d)                 f      NCP        f      NCP        f      NCP
0.2       0.4     5      0.105  0.167      0.088  0.233      0.083  0.312
                  10     0.105  0.333      0.088  0.467      0.083  0.625
                  15     0.105  0.500      0.088  0.700      0.083  0.937
                  20     0.105  0.667      0.088  0.933      0.083  1.250
                  25     0.105  0.833      0.088  1.167      0.083  1.562
                  30     0.105  1.000      0.088  1.400      0.083  1.875
          0.8     5      0.183  0.500      0.153  0.700      0.144  0.937
                  10     0.183  1.000      0.153  1.400      0.144  1.875
                  15     0.183  1.500      0.153  2.100      0.144  2.812
                  20     0.183  2.000      0.153  2.800      0.144  3.750
                  25     0.183  2.500      0.153  3.500      0.144  4.687
                  30     0.183  3.000      0.153  4.200      0.144  5.625
0.5       0.4     5      0.264  1.042      0.220  1.458      0.208  1.953
                  10     0.264  2.083      0.220  2.917      0.208  3.906
                  15     0.264  3.125      0.220  4.375      0.208  5.859
                  20     0.264  4.167      0.220  5.833      0.208  7.812
                  25     0.264  5.208      0.220  7.292      0.208  9.766
                  30     0.264  6.250      0.220  8.750      0.208  11.719
          0.8     5      0.456  3.125      0.382  4.375      0.361  5.859
                  10     0.456  6.250      0.382  8.750      0.361  11.719
                  15     0.456  9.375      0.382  13.125     0.361  17.578
                  20     0.456  12.500     0.382  17.500     0.361  23.437
                  25     0.456  15.625     0.382  21.875     0.361  29.297
                  30     0.456  18.750     0.382  26.250     0.361  35.156
0.8       0.4     5      0.422  2.667      0.353  3.733      0.333  5.000
                  10     0.422  5.333      0.353  7.467      0.333  10.000
                  15     0.422  8.000      0.353  11.200     0.333  15.000
                  20     0.422  10.667     0.353  14.933     0.333  20.000
                  25     0.422  13.333     0.353  18.667     0.333  25.000
                  30     0.422  16.000     0.353  22.400     0.333  30.000
          0.8     5      0.730  8.000      0.611  11.200     0.577  15.000
                  10     0.730  16.000     0.611  22.400     0.577  30.000
                  15     0.730  24.000     0.611  33.600     0.577  45.000
                  20     0.730  32.000     0.611  44.800     0.577  60.000
                  25     0.730  40.000     0.611  56.000     0.577  75.000
                  30     0.730  48.000     0.611  67.200     0.577  90.000
f = Cohen's f    NCP = Noncentrality Parameter
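The Appendix 7.1 entries appear to follow the standard one-way repeated-measures relationship in which the noncentrality parameter equals nKf², with Cohen's f already reflecting the average correlation among trials (note that f changes with Ave r for a fixed d). A minimal cross-check under that assumption; the formula is inferred from the tabled values rather than quoted from the thesis:

```python
def ncp_one_way_rm(f, n, K):
    """Assumed relationship behind Appendix 7.1: lambda = n * K * f**2,
    where f is Cohen's f after the correlation adjustment."""
    return n * K * f ** 2

# Two rows of Appendix 7.1:
print(ncp_one_way_rm(0.105, n=5, K=3))    # ~0.165 (table: 0.167; d = 0.2, Ave r = 0.4)
print(ncp_one_way_rm(0.361, n=20, K=9))   # ~23.46 (table: 23.437; d = 0.5, Ave r = 0.8)
```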
Appendix 7.2
Effect Size and Noncentrality Parameter Values For a Two-Way 2(G) x 3(T) Mixed ANOVA Design (constant r matrix pattern only)

Effect    Ave r   n     Group (G)         Trials (T)        Group x Trials
Size (d)                f      NCP        f      NCP        f      NCP
0.2       0.4     5     0.075  0.167      0.105  0.333      0.075  0.083
                  10    0.075  0.333      0.105  0.667      0.075  0.167
                  15    0.075  0.500      0.105  1.000      0.075  0.250
                  20    0.075  0.667      0.105  1.333      0.075  0.333
                  25    0.075  0.833      0.105  1.667      0.075  0.417
                  30    0.075  1.000      0.105  2.000      0.075  0.500
          0.8     5     0.062  0.115      0.183  1.000      0.129  0.250
                  10    0.062  0.231      0.183  2.000      0.129  0.500
                  15    0.062  0.346      0.183  3.000      0.129  0.750
                  20    0.062  0.462      0.183  4.000      0.129  1.000
                  25    0.062  0.577      0.183  5.000      0.129  1.250
                  30    0.062  0.692      0.183  6.000      0.129  1.500
0.5       0.4     5     0.186  1.042      0.264  2.083      0.186  0.521
                  10    0.186  2.083      0.264  4.167      0.186  1.042
                  15    0.186  3.125      0.264  6.250      0.186  1.562
                  20    0.186  4.167      0.264  8.333      0.186  2.083
                  25    0.186  5.208      0.264  10.417     0.186  2.604
                  30    0.186  6.250      0.264  12.500     0.186  3.125
          0.8     5     0.155  0.721      0.456  6.250      0.323  1.563
                  10    0.155  1.442      0.456  12.500     0.323  3.125
                  15    0.155  2.163      0.456  18.750     0.323  4.688
                  20    0.155  2.885      0.456  25.000     0.323  6.250
                  25    0.155  3.606      0.456  31.250     0.323  7.813
                  30    0.155  4.327      0.456  37.500     0.323  9.375
0.8       0.4     5     0.298  2.667      0.422  5.333      0.298  1.333
                  10    0.298  5.333      0.422  10.667     0.298  2.667
                  15    0.298  8.000      0.422  16.000     0.298  4.000
                  20    0.298  10.667     0.422  21.333     0.298  5.333
                  25    0.298  13.333     0.422  26.667     0.298  6.667
                  30    0.298  16.000     0.422  32.000     0.298  8.000
          0.8     5     0.248  1.846      0.730  16.000     0.516  4.000
                  10    0.248  3.692      0.730  32.000     0.516  8.000
                  15    0.248  5.538      0.730  48.000     0.516  12.000
                  20    0.248  7.385      0.730  64.000     0.516  16.000
                  25    0.248  9.231      0.730  80.000     0.516  20.000
                  30    0.248  11.077     0.730  96.000     0.516  24.000
f = Cohen's f    NCP = Noncentrality Parameter

Appendix 7.3
Effect Size and Noncentrality Parameter Values For a Two-Way 2(G) x 6(T) Mixed ANOVA Design (constant r matrix pattern only)

Effect    Ave r   n     Group (G)         Trials (T)        Group x Trials
Size (d)                f      NCP        f      NCP        f      NCP
0.2       0.4     5     0.058  0.200      0.088  0.467      0.062  0.117
                  10    0.058  0.400      0.088  0.933      0.062  0.233
                  15    0.058  0.600      0.088  1.400      0.062  0.350
                  20    0.058  0.800      0.088  1.867      0.062  0.467
                  25    0.058  1.000      0.088  2.333      0.062  0.583
                  30    0.058  1.200      0.088  2.800      0.062  0.700
          0.8     5     0.045  0.120      0.153  1.400      0.108  0.350
                  10    0.045  0.240      0.153  2.800      0.108  0.700
                  15    0.045  0.360      0.153  4.200      0.108  1.050
                  20    0.045  0.480      0.153  5.600      0.108  1.400
                  25    0.045  0.600      0.153  7.000      0.108  1.750
                  30    0.045  0.720      0.153  8.400      0.108  2.100
0.5       0.4     5     0.144  1.250      0.220  2.917      0.156  0.729
                  10    0.144  2.500      0.220  5.833      0.156  1.458
                  15    0.144  3.725      0.220  8.750      0.156  2.187
                  20    0.144  5.000      0.220  11.667     0.156  2.917
                  25    0.144  6.250      0.220  14.583     0.156  3.646
                  30    0.144  7.500      0.220  17.500     0.156  4.375
          0.8     5     0.112  0.750      0.382  8.750      0.270  2.188
                  10    0.112  1.500      0.382  17.500     0.270  4.375
                  15    0.112  2.250      0.382  26.250     0.270  6.563
                  20    0.112  3.000      0.382  35.000     0.270  8.750
                  25    0.112  3.750      0.382  43.750     0.270  10.938
                  30    0.112  4.500      0.382  52.500     0.270  13.125
0.8       0.4     5     0.231  3.200      0.353  7.467      0.249  1.867
                  10    0.231  6.400      0.353  14.933     0.249  3.733
                  15    0.231  9.600      0.353  22.400     0.249  5.600
                  20    0.231  12.800     0.353  29.867     0.249  7.467
                  25    0.231  16.000     0.353  37.333     0.249  9.333
                  30    0.231  19.200     0.353  44.800     0.249  11.200
          0.8     5     0.179  1.920      0.611  22.400     0.432  5.600
                  10    0.179  3.840      0.611  44.800     0.432  11.200
                  15    0.179  5.760      0.611  67.200     0.432  16.800
                  20    0.179  7.680      0.611  89.600     0.432  22.400
                  25    0.179  9.600      0.611  112.000    0.432  28.000
                  30    0.179  11.520     0.611  134.400    0.432  33.600
f = Cohen's f    NCP = Noncentrality Parameter
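The mixed-design tables show an analogous pattern: the tabled NCPs for the Group and Trials tests agree with λ = N·K·f², where N = 2n is the total number of subjects, while the Group x Trials entries agree with that quantity divided by the number of groups (i.e., n·K·f² here). This is an inference from the printed values, not a formula stated in this excerpt; a small numerical check:

```python
def ncp_mixed(f, n_per_group, K, groups=2, interaction=False):
    """Pattern inferred from Appendices 7.2-7.4: lambda = N * K * f**2 for the
    Group and Trials tests (N = groups * n_per_group); the Group x Trials
    entries match that value divided by the number of groups."""
    lam = groups * n_per_group * K * f ** 2
    return lam / groups if interaction else lam

# Appendix 7.2 (2 x 3 design), d = 0.5, Ave r = 0.4, n = 10 per group:
print(ncp_mixed(0.264, 10, 3))                    # ~4.18 (Trials; table: 4.167)
print(ncp_mixed(0.186, 10, 3))                    # ~2.08 (Group; table: 2.083)
print(ncp_mixed(0.186, 10, 3, interaction=True))  # ~1.04 (Group x Trials; table: 1.042)
```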
Appendix 7.4
Effect Size and Noncentrality Parameter Values For a Two-Way 2(G) x 9(T) Mixed ANOVA Design (constant r matrix pattern only)

Effect    Ave r   n     Group (G)         Trials (T)        Group x Trials
Size (d)                f      NCP        f      NCP        f      NCP
0.2       0.4     5     0.049  0.214      0.083  0.625      0.059  0.156
                  10    0.049  0.429      0.083  1.250      0.059  0.312
                  15    0.049  0.643      0.083  1.875      0.059  0.469
                  20    0.049  0.857      0.083  2.500      0.059  0.625
                  25    0.049  1.071      0.083  3.125      0.059  0.781
                  30    0.049  1.286      0.083  3.750      0.059  0.937
          0.8     5     0.037  0.122      0.144  1.875      0.102  0.469
                  10    0.037  0.243      0.144  3.750      0.102  0.938
                  15    0.037  0.365      0.144  5.625      0.102  1.406
                  20    0.037  0.486      0.144  7.500      0.102  1.875
                  25    0.037  0.608      0.144  9.375      0.102  2.344
                  30    0.037  0.730      0.144  11.250     0.102  2.813
0.5       0.4     5     0.122  1.339      0.208  3.906      0.147  0.977
                  10    0.122  2.679      0.208  7.812      0.147  1.953
                  15    0.122  4.018      0.208  11.719     0.147  2.930
                  20    0.122  5.357      0.208  15.625     0.147  3.906
                  25    0.122  6.696      0.208  19.531     0.147  4.883
                  30    0.122  8.036      0.208  23.437     0.147  5.859
          0.8     5     0.092  0.760      0.361  11.719     0.255  2.930
                  10    0.112  1.520      0.361  23.438     0.255  5.859
                  15    0.112  2.280      0.361  35.156     0.255  8.789
                  20    0.112  3.041      0.361  46.875     0.255  11.719
                  25    0.112  3.801      0.361  58.594     0.255  14.648
                  30    0.112  4.561      0.361  70.313     0.255  17.578
0.8       0.4     5     0.195  3.429      0.333  10.000     0.236  2.500
                  10    0.195  6.857      0.333  20.000     0.236  5.000
                  15    0.195  10.286     0.333  30.000     0.236  7.500
                  20    0.195  13.714     0.333  40.000     0.236  10.000
                  25    0.195  17.143     0.333  50.000     0.236  12.500
                  30    0.195  20.571     0.333  60.000     0.236  15.000
          0.8     5     0.147  1.946      0.577  30.000     0.408  7.500
                  10    0.147  3.892      0.577  60.000     0.408  15.000
                  15    0.147  5.838      0.577  90.000     0.408  22.500
                  20    0.147  7.784      0.577  120.000    0.408  30.000
                  25    0.147  9.730      0.577  150.000    0.408  37.500
                  30    0.147  11.676     0.577  180.000    0.408  45.000
f = Cohen's f    NCP = Noncentrality Parameter

[The remaining appendix tables (pages 168-169) are not recoverable from the OCR text of this scan.]
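Although the appendix power tables themselves could not be recovered from this scan, power for any row of Appendix 7.1 can be approximated from its noncentrality parameter with the noncentral F distribution, using the sphericity-based degrees of freedom df1 = K - 1 and df2 = (n - 1)(K - 1). A minimal sketch of that standard calculation (not code from the thesis):

```python
from scipy.stats import f as f_dist, ncf

def rm_anova_power(ncp, n, K, alpha=0.05):
    """Approximate power of the one-way RM ANOVA F test for a given
    noncentrality parameter, assuming sphericity for the degrees of freedom."""
    df1, df2 = K - 1, (n - 1) * (K - 1)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)       # critical value under H0
    return 1.0 - ncf.cdf(f_crit, df1, df2, ncp)    # P(F' > f_crit | lambda)

# Appendix 7.1: d = 0.8, Ave r = 0.8, n = 15, K = 3 gives NCP = 24.0:
print(rm_anova_power(24.0, n=15, K=3))
```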

Cite

Citation Scheme:

        

Citations by CSL (citeproc-js)

Usage Statistics

Share

Embed

Customize your widget with the following options, then copy and paste the code below into the HTML of your page to embed this item in your website.
                        
                            <div id="ubcOpenCollectionsWidgetDisplay">
                            <script id="ubcOpenCollectionsWidget"
                            src="{[{embed.src}]}"
                            data-item="{[{embed.item}]}"
                            data-collection="{[{embed.collection}]}"
                            data-metadata="{[{embed.showMetadata}]}"
                            data-width="{[{embed.width}]}"
                            async >
                            </script>
                            </div>
                        
                    
IIIF logo Our image viewer uses the IIIF 2.0 standard. To load this item in other compatible viewers, use this url:
https://iiif.library.ubc.ca/presentation/dsp.831.1-0077309/manifest

Comment

Related Items