Basic terminology in sampling 4
Basic terminology in statistical estimation 7
Basic terminology in testing statistical hypotheses 10
Testing hypotheses on means—σ assumed known 14
Testing hypotheses on means—σ unknown 21
Testing hypotheses about the difference between two means—assuming homogeneity of variance 26
Computational formulas for the t statistic 35
Test for homogeneity of variance 37
Testing hypotheses about the difference between two means—assuming population variances not equal 41
Testing hypotheses about the difference between two means—correlated observations 44
Combining several independent tests on the same hypothesis 49
Outliers and winsorized t statistic 51
Multivariate analog of test on differences between two means—Hotelling's T² 54
Linear model—no distribution assumptions
Linear model—estimation in univariate case
Linear model—multivariate case with distribution assumptions
Correlations
Dwyer and SWP algorithms for the inverse of a symmetric matrix
Transformations yielding uncorrelated variables
Two sets of predictors
Testing statistical hypotheses—fixed model
Regression of regression coefficients on supplementary variables
Introduction
Definitions and numerical example
Structural model for single-factor experiment—model I
Structural model for single-factor experiment—model II (variance-component model)
Methods for deriving estimates and their expected values
Comparisons among treatment means
Use of orthogonal components in tests for trend
Use of studentized range statistic
Alternative procedures for making a posteriori tests
Comparing all means with a control
Tests for homogeneity of variance
Unequal sample sizes
Power and determination of sample size—fixed model
Linear model with fixed variables
Multivariate analysis of variance
Randomized complete-block designs
Some special features of the variance-component model
Maximum-likelihood estimation and likelihood-ratio test
General principle in hypothesis testing
Testing the hypothesis of equality of a subset of the τj (fixed model)
Chapter 4 SINGLE-FACTOR EXPERIMENTS HAVING REPEATED MEASURES ON THE SAME ELEMENTS

4.1 Purpose
4.2 Notation and computational procedures
4.3 Numerical example
4.4 Statistical basis for the analysis
4.5 Use of analysis of variance to estimate reliability of measurements
4.6 Tests for trend
General purpose 309
Terminology and notation 311
Main effects 316
Interaction effects 318
Experimental error and its estimation 320
Estimation of mean squares due to main effects and interaction effects 321
Principles for constructing F ratios 332
Higher-order factorial experiments 335
Estimation and tests of significance for three-factor experiments 343
Simple effects and their tests 347
Geometric interpretation of higher-order interactions 351
Nested factors (hierarchal designs) 359
Split-plot designs 366
Rules for deriving the expected values of mean squares 371
Quasi F ratios 375
Preliminary tests on the model and pooling procedures 378
Individual comparisons 384
Partition of main effects and interaction into trend components 388
Replicated experiments 391
The case n = 1 and a test for nonadditivity 394
The choice of a scale of measurement and transformations 397
Unequal cell frequencies 402
Unequal cell frequencies—least-squares estimation 404
Estimability in a general sense 422
Estimation of variance components 425
Estimation of the magnitude of experimental effects 428
6.1 General purpose
6.2 p x q factorial experiment having n observations per cell
6.3 p x q factorial experiment—unequal cell frequencies
6.4 Effect of scale of measurement on interaction
6.5 p x q x r factorial experiment having n observations per cell
6.6 Computational procedures for nested factors
6.7 Factorial experiment with a single control group
6.8 Test for nonadditivity
6.9 Computation of trend components
6.10 General computational formulas for main effects and interactions
6.11 Missing data
6.12 Special computational procedures when all factors have two levels
6.13 Illustrative applications
6.14 Unequal cell frequencies—least-squares solution
6.15 Analysis of variance in terms of polynomial regression
Chapter 7 MULTIFACTOR EXPERIMENTS HAVING REPEATED MEASURES ON THE SAME ELEMENTS

7.1 General purpose
7.2 Two-factor experiment with repeated measures on one factor
7.3 Three-factor experiment with repeated measures (case I)
7.4 Three-factor experiment with repeated measures (case II)
7.5 Other multifactor repeated-measure plans
7.6 Tests on trends
7.7 Testing equality and symmetry of covariance matrices
7.8 Unequal group size
Chapter 8 FACTORIAL EXPERIMENTS IN WHICH SOME OF THE INTERACTIONS ARE CONFOUNDED

8.1 General purpose
8.2 Modular arithmetic
8.3 Revised notation for factorial experiments
8.4 Method for obtaining the components of interactions
8.5 Designs for 2 x 2 x 2 factorial experiments in blocks of size 4
8.6 Simplified computational procedures for 2^k factorial experiments
8.7 Numerical example of 2 x 2 x 2 factorial experiment in blocks of size 4
8.8 Numerical example of 2 x 2 x 2 factorial experiment in blocks of size 4 (repeated measures)
8.9 Designs for 3 x 3 factorial experiments
8.10 Numerical example of 3 x 3 factorial experiment in blocks of size 3
8.11 Designs for 3 x 3 x 3 factorial experiments
8.12 Balanced 3 x 2 x 2 factorial experiment in blocks of size 6
8.13 Numerical example of 3 x 2 x 2 factorial experiment in blocks of size 6
8.14 3 x 3 x 3 x 2 factorial experiment in blocks of size 6
8.15 Fractional replication

Chapter 9 LATIN SQUARES AND RELATED DESIGNS
9.1 Definition of Latin square 685
9.2 Enumeration of Latin squares 688
9.3 Structural relation between Latin squares and three-factor factorial experiments 691
9.4 Uses of Latin squares 693
9.5 Analysis of Latin-square designs—no repeated measures 696
9.6 Analysis of Greco-Latin squares 709
9.7 Analysis of Latin squares—repeated measures 711
General purpose 752
Single-factor experiment 755
Numerical example of single-factor experiment 775
Factorial experiment 781
Computational procedures for factorial experiment 787
Factorial experiment—repeated measures 796
Multiple covariates 809

Appendix A RANDOM VARIABLES 813

A.1 Random variables and probability distributions 814
A.2 Normal distribution 822
A.3 Gamma and chi-square distributions 824
A.4 Beta and F distributions 828
A.5 Student's t distribution 834
A.6 Bivariate normal distribution 837
A.7 Multivariate normal distribution 839
A.8 Distribution of quadratic forms 845

Appendix B TOPICS CLOSELY RELATED TO THE ANALYSIS OF VARIANCE 848

B.1 Kruskal-Wallis H test 848
B.2 Contingency table with repeated measures 849
B.3 Comparing treatment effects with a control 854
B.4 General partition of degrees of freedom in a contingency table 855

Appendix C TABLES 860
C.1 Unit normal distribution 861
C.2 Student's t distribution 863
C.3 F distribution 865
C.3a F distribution (supplement) 868
C.4 Distribution of the studentized range statistic 870
C.5 Arcsin transformation 872
C.6 Distribution of t statistic in comparing treatment means with a control 873
C.7 Distribution of Fmax statistic 875
C.8 Critical values for Cochran's test for homogeneity of variance 876
C.9 Chi-square distribution 877
C.10 Coefficients of orthogonal polynomials 878
C.11 Curves of constant power for tests on main effects
C.12 Random permutations of 16 numbers
C.13 Noncentral t distribution
C.14 Noncentral F distribution