CHAPTER 9

Introduction to the Analysis of Variance

Chapter Outline

✪ Basic Logic of the Analysis of Variance 311

✪ Carrying Out an Analysis of Variance 319

✪ Hypothesis Testing with the Analysis of Variance 327

✪ Assumptions in the Analysis of Variance 331

✪ Planned Contrasts 334

✪ Post Hoc Comparisons 337

✪ Effect Size and Power for the Analysis of Variance 339

✪ Controversy: Omnibus Tests versus Planned Contrasts 343

✪ Analyses of Variance in Research Articles 344

✪ Advanced Topic: The Structural Model in the Analysis of Variance 345

✪ Summary 351

✪ Key Terms 352

✪ Example Worked-Out Problems 353

✪ Practice Problems 357

✪ Using SPSS 364

✪ Chapter Notes 368

TIP FOR SUCCESS This chapter assumes you understand the logic of hypothesis testing and the t test (particularly estimated population variance and the distribution of means). So be sure you understand the relevant material in Chapters 4, 5, 7, and 8 before starting this chapter.

In Chapter 8, you learned about the t test for independent means, a procedure for comparing two groups of scores from entirely separate groups of people (such as an experimental group and a control group). In this chapter, you will learn about a procedure for comparing more than two groups of scores, each of which is from an entirely separate group of people.

We will begin with an example. Cindy Hazan and Philip Shaver (1987) arranged to have the Rocky Mountain News, a large Denver area newspaper, print a mail-in survey. The survey included the question shown in Table 9–1 to measure what is called attachment style. (How would you answer this item?) Those who selected the first choice are “secure”; those who selected the second, “avoidant”; and those who selected

ISBN 0-558-46761-X

Statistics for Psychology, Fifth Edition, by Arthur Aron, Elaine N. Aron, and Elliot J. Coups. Published by Prentice Hall. Copyright © 2009 by Pearson Education, Inc.

Introduction to the Analysis of Variance 311

analysis of variance (ANOVA) hypothesis-testing procedure for studies with three or more groups.

the third, “anxious-ambivalent.” These attachment styles are thought to be different ways of behaving and thinking in close relationships that develop from a person’s experience with early caretakers (Mikulincer & Shaver, 2007). (Of course, this single item is only a very rough measure that works for a large survey but is certainly not definitive in any particular person.) Readers also answered questions about various aspects of love, including amount of jealousy. Hazan and Shaver then compared the amount of jealousy reported by people with the three different attachment styles.

With a t test for independent means, Hazan and Shaver could have compared the mean jealousy scores of any two of the attachment styles. Instead, they were interested in differences among all three attachment styles. The statistical procedure for testing variation among the means of more than two groups is called the analysis of variance, abbreviated as ANOVA. (You could use the analysis of variance for a study with only two groups, but the simpler t test gives the same result.)

In this chapter, we introduce the analysis of variance, focusing on the situation in which the different groups being compared each have the same number of scores. In an Advanced Topic section later in the chapter, we describe a more flexible way of thinking about analysis of variance that allows groups to have different numbers of scores. In Chapter 10, we consider situations in which the different groups are arrayed across more than one dimension. For example, in the same analysis we might consider both gender and attachment style, making six groups in all (female secure, male secure, female avoidant, etc.), arrayed across the two dimensions of gender and attachment style. This situation is known as a factorial analysis of variance. To emphasize the difference from factorial analysis of variance, what you learn in this chapter is often called a one-way analysis of variance. (If this is confusing, don’t worry. We will go through it slowly and systematically in Chapter 10. We only mention this now so that, if you hear these terms, you will not be surprised.)

Basic Logic of the Analysis of Variance

The null hypothesis in an analysis of variance is that the several populations being compared all have the same mean. For example, in the attachment style example, the null hypothesis is that the populations of secure, avoidant, and anxious-ambivalent people all have the same degree of jealousy. The research hypothesis would be that the degree of jealousy differs among these three populations.

Hypothesis testing in analysis of variance is about whether the means of the samples differ more than you would expect if the null hypothesis were true. This question about means is answered, surprisingly, by analyzing variances (hence the name analysis of variance). Among other reasons, you focus on variances because, when you want to know how several means differ, you are asking about the variation among those means.

Table 9–1 Question Used in Hazan and Shaver (1987) Newspaper Survey

Which of the following best describes your feelings? [Check one]

[ ] I find it relatively easy to get close to others and am comfortable depending on them and having them depend on me. I don’t often worry about being abandoned or about someone getting too close to me.

[ ] I am somewhat uncomfortable being close to others; I find it difficult to trust them completely, difficult to allow myself to depend on them. I am nervous when anyone gets too close, and often, love partners want me to be more intimate than I feel comfortable being.

[ ] I find that others are reluctant to get as close as I would like. I often worry that my partner doesn’t really love me or won’t want to stay with me. I want to merge completely with another person, and this desire sometimes scares people away.

Source: Hazan and Shaver (1987, p. 515).

Thus, to understand the logic of analysis of variance, we consider variances. In particular, we begin by discussing two different ways of estimating population variances. As you will see, the analysis of variance is about a comparison of the results of these two different ways of estimating population variances.

Estimating Population Variance from Variation Within Each Sample

With the analysis of variance, as with the t test, you do not know the true population variances. However, as with the t test, you can estimate the variance of each of the populations from the scores in the samples. Also, as with the t test, you assume in the analysis of variance that all populations have the same variance. This allows you to average the estimates from each sample into a single pooled estimate, called the within-groups estimate of the population variance. It is an average of estimates figured entirely from the scores within each of the samples.
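As a minimal sketch of this pooling idea (the scores are hypothetical; equal group sizes are assumed, matching this chapter's focus, so a straight average of the estimates works):

```python
import statistics

def within_groups_estimate(groups):
    """Pool the unbiased variance estimate (n - 1 in the denominator)
    from each group by averaging them into a single estimate."""
    estimates = [statistics.variance(group) for group in groups]
    return sum(estimates) / len(estimates)

# Hypothetical samples of equal size from three groups.
pooled = within_groups_estimate([[3, 4, 5], [6, 8, 10], [1, 1, 4]])
# The per-group estimates are 1.0, 4.0, and 3.0; their average is about 2.67.
```

Note that the pooled value depends only on how spread out the scores are inside each group, not on how far apart the group means are, which is the point made above.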

One of the most important things to remember about this within-groups estimate is that it is not affected by whether the null hypothesis is true. This estimate comes out the same whether the means of the populations are all the same (the null hypothesis is true) or the means of the populations are not all the same (the null hypothesis is false). This estimate comes out the same because it focuses only on the variation inside each population. Thus, it doesn’t matter how far apart the means of the different populations are.

If the variation in scores within each sample is not affected by whether the null hypothesis is true, what determines the level of within-group variation? The answer is that chance factors (that is, factors that are unknown to the researcher) account for why different people in a sample have different scores. These chance factors include the fact that different people respond differently to the same situation or treatment and that there may be some experimental error associated with the measurement of the variable of interest. Thus, we can think of the within-groups population variance estimate as an estimate based on chance (or unknown) factors that cause different people in a study to have different scores.

Estimating the Population Variance from Variation Between the Means of the Samples

There is also a second way to estimate the population variance. Each sample’s mean is a number in its own right. If there are several samples, there are several such numbers, and these numbers will have some variation among them. The variation among these means gives another way to estimate the variance in the populations that the samples come from. Just how this works is a bit tricky; so follow the next two sections closely.

When the Null Hypothesis Is True

First, consider the situation in which the null hypothesis is true. In this situation, all samples come from populations that have the same mean. Remember, we are always assuming that all populations have the same variance (and also that they are all normal curves). Thus, if the null hypothesis is true, all populations are identical and thus they have the same mean, variance, and shape.

within-groups estimate of the population variance estimate of the variance of the population of individuals based on the variation among the scores in each of the actual groups studied.

IS B

N 0-558-46761-X

Statistics for Psychology, Fifth Edition, by Arthur Aron, Elaine N. Aron, and Elliot J. Coups. Published by Prentice Hall. Copyright © 2009 by Pearson Education, Inc.

Introduction to the Analysis of Variance 313

However, even when the populations are identical (that is, even when the null hypothesis is true), samples from the different populations will each be a little different. How different can the sample means be? That depends on how much variation there is in each population. If a population has very little variation in the scores in it, then the means of samples from that population (or any identical population) will tend to be very similar to each other. When the null hypothesis is true, the variability among the sample means is influenced by the same chance factors that influence the variability among the scores within each sample.

What if several identical populations (with the same population mean) have a lot of variation in the scores within each? In that situation, if you take one sample from each population, the means of those samples could easily be very different from each other. Being very different, the variance of these means will be large. The point is that the more variance within each of several identical populations, the more variance there will be among the means of samples when you take a random sample from each population.

Suppose you were studying samples of six children from each of three large playgrounds (the populations in this example). If each playground had children who were all either 7 or 8 years old, the means of your three samples would all be between 7 and 8. Thus, there would not be much variance among those means. However, if each playground had children ranging from 3 to 12 years old, the means of the three samples would probably vary quite a bit. What this shows is that the variation among the means of samples is related directly to the amount of variation in each of the populations from which the samples are taken. The more variation in each population, the more variation there is among the means of samples taken from those populations.

This principle is shown in Figure 9–1. The three identical populations on the left have small variances, and the three identical populations on the right have large variances. In each set of three identical populations, even though the means of the populations (shown by triangles) are exactly the same, the means of the samples from those populations (shown by Xs) are not exactly the same. Most important, the sample means from the populations that each have a small amount of variance are closer together (have less variance among them). The sample means from the populations that each have more variance are more spread out (have more variance among them).

We have now seen that the variation among the means of samples taken from identical populations is related directly to the variation of the scores in each of those populations. This has a very important and perhaps surprising implication: it should be possible to estimate the variance in each population from the variation among the means of our samples.
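This relationship is easy to check with a small simulation. The sketch below is hypothetical (it is not from the original text) and assumes normal populations with a mean of 0; it draws many samples of six scores, echoing the playground example, and compares how much the sample means vary:

```python
import random
import statistics

def variance_of_sample_means(pop_sd, n_samples=1000, n_per_sample=6):
    """Draw many samples of n_per_sample scores from identical normal
    populations (mean 0, standard deviation pop_sd) and return the
    variance among the resulting sample means."""
    means = [
        statistics.mean(random.gauss(0, pop_sd) for _ in range(n_per_sample))
        for _ in range(n_samples)
    ]
    return statistics.variance(means)

random.seed(0)  # fixed seed so the sketch is reproducible
spread_small = variance_of_sample_means(pop_sd=1.0)   # little variation within populations
spread_large = variance_of_sample_means(pop_sd=10.0)  # much variation within populations
# The sample means are far more spread out when the populations vary more,
# even though every population has exactly the same mean.
```

This is the basis for the between-groups estimate: since the spread of the sample means tracks the variance inside the populations, that spread can be turned around into an estimate of the population variance.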

Such an estimate is called a between-groups estimate of the population variance. (It has this name because it is based on the variation between the means of the samples, the “groups.” Grammatically, it ought to be among groups, but between groups is traditional.) You will learn how to figure this estimate later in the chapter.

So far, all of this logic we have considered has assumed that the null hypothesis is true, so that there is no variation among the means of the populations. In this situation, the between-groups estimate of the population variance (which reflects variability in the means of the samples) is influenced by the chance factors that cause different people in the same sample to have different scores. Let’s now consider what happens when the null hypothesis is not true, when instead the research hypothesis is true.

When the Null Hypothesis Is Not True

If the null hypothesis is not true (and thus the research hypothesis is true), the populations themselves have different means. In this situation, variation among the means of samples taken from these

between-groups estimate of the population variance estimate of the variance of the population of individuals based on the variation among the means of the groups studied.


populations is still caused by the chance factors that cause variation within the populations. So the larger the variation within the populations, the larger the variation will be among the means of samples taken from the populations. However, in this situation, in which the research hypothesis is true, variation among the means of the samples also is caused by variation among the population means. You can think of

Figure 9–1 Means of samples from identical populations will not be identical. (a) Sample means from populations with less variation will vary less. (b) Sample means from populations with more variation will vary more. (Population means are indicated by a triangle, sample means by an X.)


Figure 9–2 Means of samples from populations whose means differ (b) will vary more than sample means taken from populations whose means are the same (a). (Population means are indicated by a triangle, sample means by an X.)

this variation among population means as resulting from a treatment effect—that is, the different treatment received by the groups (as in an experiment) causes the groups to have different means. So, when the research hypothesis is true, the means of the samples are spread out for two different reasons: (1) because of variation in each of the populations (due to chance factors) and (2) because of variation among the population means (that is, a treatment effect). The left side of Figure 9–2 shows populations with the same means (shown by triangles) and the means of samples taken from them (shown by Xs). (This is the same situation as in both sides of Figure 9–1.) The right side of Figure 9–2 shows three populations with different means (shown by triangles) and the means of samples taken from them (shown by Xs). (This is the situation we have just been discussing.) Notice that the means of the samples are more spread out in the situation on the right side of Figure 9–2. This is true even though the variations in the populations are the same for the situation on both sides of Figure 9–2. This additional spread (variance) for the means on the right side of Figure 9–2 is due to the populations having different means.

In summary, the between-groups estimate of the population variance is figured based on the variation among the means of the samples. If the null hypothesis is true, this estimate gives an accurate indication of the variation within the populations (that is, the variation due to chance factors). But if the null hypothesis is false, this method of estimating the population variance is influenced both by the variation within the populations (the variation due to chance factors) and the variation among the population means (the variation due to a treatment effect). It will not give an accurate estimate of the variation within the populations because it also will be affected by the variation among the populations. This difference between the two situations has important

TIP FOR SUCCESS You may want to read this paragraph again to ensure that you fully understand the logic we are presenting.


implications. It is what makes the analysis of variance a method of testing hypotheses about whether there is a difference among means of populations.

Comparing the Within-Groups and Between-Groups Estimates of Population Variance

Table 9–2 summarizes what we have seen so far about the within-groups and between-groups estimates of population variance, both when the null hypothesis is true and when the research hypothesis is true. When the null hypothesis is true, the within-groups and between-groups estimates are based on the same thing (that is, the chance variation within populations). Literally, they are estimates of the same population variance. Therefore, when the null hypothesis is true, both estimates should be about the same. (Only about the same; these are estimates.) Here is another way of describing this similarity of the between-groups estimate and the within-groups estimate when the null hypothesis is true: In this situation, the ratio of the between-groups estimate to the within-groups estimate should be approximately one to one. For example, if the within-groups estimate is 107.5, the between-groups estimate should be around 107.5, so that the ratio would be about 1. (A ratio is found by dividing one number by the other; thus 107.5/107.5 = 1.)

The situation is quite different when the null hypothesis is not true. As shown in Table 9–2, when the research hypothesis is true, the between-groups estimate is influenced by two sources of variation: (a) the variation of the scores in each population (due to chance factors) and (b) the variation of the means of the populations from each other (due to a treatment effect). Yet even when the research hypothesis is true, the within-groups estimate still is influenced only by the variation in the populations. Therefore, when the research hypothesis is true, the between-groups estimate should be larger than the within-groups estimate. In this situation, the ratio of the between-groups estimate to the within-groups estimate should be greater than 1. For example, the between-groups estimate might be 638.9 and the within-groups estimate 107.5, making a ratio of 638.9 to 107.5, or 5.94. In this example the between-groups estimate is nearly six times bigger (5.94 times to be exact) than the within-groups estimate.
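The comparison in the last two paragraphs can be sketched as a few lines of arithmetic, using the same illustrative estimates (107.5 and 638.9) given above:

```python
def ratio(between_estimate, within_estimate):
    """Ratio of the between-groups variance estimate to the
    within-groups variance estimate."""
    return between_estimate / within_estimate

# Null hypothesis true: both estimates reflect only chance variation
# within the populations, so the ratio comes out around 1.
ratio_when_null_true = ratio(107.5, 107.5)      # exactly 1.0 here

# Research hypothesis true: the between-groups estimate also picks up
# the treatment effect, so the ratio climbs well above 1.
ratio_when_research_true = ratio(638.9, 107.5)  # about 5.94
```

In practice the two estimates are rarely exactly equal even when the null hypothesis is true; the question the rest of the chapter answers is how much bigger than 1 the ratio must be before the null hypothesis can be rejected.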

This is the central principle of the analysis of variance: When the null hypothesis is true, the ratio of the between-groups population variance estimate to the within-groups population variance estimate should be about 1. When the research hypothesis is true, this ratio should be greater than 1. If you figure this ratio and it comes out much greater than 1, you can reject the null hypothesis. That is, it is unlikely that the null hypothesis could be true and the between-groups estimate be a lot bigger than the within-groups estimate.

TIP FOR SUCCESS Table 9–2 summarizes the logic of the analysis of variance. Test your understanding of this logic by trying to explain Table 9–2 without referring to the book. You might try writing your answer down and swapping it with someone else in your class.

Table 9–2 Sources of Variation in Within-Groups and Between-Groups Variance Estimates

                                      Variation Within          Variation Between
                                      Populations (Due to       Populations (Due to
                                      Chance Factors)           a Treatment Effect)
Null hypothesis is true
  Within-groups estimate reflects            ✓
  Between-groups estimate reflects           ✓
Research hypothesis is true
  Within-groups estimate reflects            ✓
  Between-groups estimate reflects           ✓                          ✓

The F Ratio

This crucial ratio of the between-groups to the within-groups population variance estimate is called an F ratio. (The F is for Sir Ronald Fisher, an eminent statistician who developed the analysis of variance; see Box 9–1.)

The F Distribution and the F Table

We have said that if the crucial ratio of between-groups estimate to within-groups estimate (the F ratio) is a lot larger than 1, you can reject the null hypothesis. The next question is, “Just how much bigger than 1 should it be?”

F ratio ratio of the between-groups population variance estimate to the within-groups population variance estimate.

BOX 9–1 Sir Ronald Fisher, Caustic Genius of Statistics

Ronald A. Fisher, a contemporary of William Gosset (see Chapter 7, Box 7–1) and Karl Pearson (see Chapter 13, Box 13–1), was probably the brightest and certainly the most productive of this close-knit group of British statisticians. In the process of writing 300 papers and seven books, he developed many of the modern field’s key concepts: variance, analysis of variance, significance levels, the null hypothesis, and almost all of our basic ideas of research design, including the fundamental importance of randomization.

A family legend is that little Ronald, born in 1890, was so fascinated by math that one day, at age 3, when put into his highchair for breakfast, he asked his nurse, “What is a half of a half?” Told it was a quarter, he asked, “What’s half of a quarter?” To that answer he wanted to know what was half of an eighth. At the next answer he purportedly thought a moment and said, “Then I suppose that a half of a sixteenth must be a thirty-toof.” Ah, baby stories.

As a grown man, however, Fisher seems to have been anything but darling. Some observers ascribe this to a cold and unemotional mother, but, whatever the reason, throughout his life he was embroiled in bitter feuds, even with scholars who had previously been his closest allies and who certainly ought to have been comrades in research.

Fisher’s thin ration of compassion extended to his readers as well; not only was his writing hopelessly obscure, but it often simply failed to supply important assumptions and proofs. Gosset said that when Fisher began a sentence with “Evidently,” it meant two hours of hard work before one could hope to see why the point was evident.

Indeed, his lack of empathy extended to all of humankind. Like Galton, Fisher was fond of eugenics, favoring anything that might increase the birthrate of the upper and professional classes and skilled artisans. Not only did he see contraception as a poor idea—fearing that the least desirable persons would use it least—but he defended infanticide as serving an evolutionary function. It may be just as well that his opportunities to experiment with breeding never extended beyond the raising of his own children and some crops of potatoes and wheat.

Although Fisher eventually became the Galton Professor of Eugenics at University College, his most influential appointment probably came when he was invited to Iowa State College in Ames for the summers of 1931 and 1936 (where he was said to be so put out with the terrible heat that he stored his sheets in the refrigerator all day). At Ames, Fisher greatly impressed George Snedecor, an American professor of mathematics also working on agricultural problems. Consequently, Snedecor wrote a textbook of statistics for agriculture that borrowed heavily from Fisher’s work. The book so popularized Fisher’s ideas about statistics and research design that its second edition sold 100,000 copies.

You can learn more about Fisher at the following Web site: http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Fisher.html.

Sources: Peters (1987); Salsburg (2001); Stigler (1986); Tankard (1984).


TIP FOR SUCCESS These “How Are You Doing” questions and answers provide a useful summary of the logic of the analysis of variance. Be sure to review them (and the relevant sections in the text) as many times as necessary to fully understand this logic.

How are you doing?

1. When do you use an analysis of variance?
2. (a) What is the within-groups population variance estimate based on? (b) How is it affected by the null hypothesis being true or not? (c) Why?
3. (a) What is the between-groups population variance estimate based on? (b) How is it affected by the null hypothesis being true or not? (c) Why?
4. What are two sources of variation that can contribute to the between-groups population variance estimate?
5. (a) What is the F ratio; (b) why is it usually about 1 when the null hypothesis is true; and (c) why is it usually larger than 1 when the null hypothesis is false?

Statisticians have developed the mathematics of an F distribution and have prepared tables of F ratios. For any given situation, you merely look up in an F table how extreme an F ratio is needed to reject the null hypothesis at, say, the .05 level. (You learn to use the F table later in the chapter.)

For an example of an F ratio, let’s return to the attachment style study. The results of that study, for jealousy, were as follows: The between-groups population variance estimate was 23.27, and the within-groups population variance estimate was .53. (You learn shortly how to figure these estimates on your own.) The ratio of the between-groups to the within-groups variance estimates (23.27/.53) came out to 43.91; that is, F = 43.91. This F ratio is considerably larger than 1. The F ratio needed to reject the null hypothesis at the .05 level in this study is only 3.01. Thus, the researchers confidently rejected the null hypothesis and concluded that the amount of jealousy is not the same for the three attachment styles. (Mean jealous ratings were 2.17 for secures, 2.57 for avoidants, and 2.88 for anxious-ambivalents.)
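The arithmetic behind the reported decision can be checked directly. The figures below are the ones given in this paragraph; the only step added is the comparison against the cutoff:

```python
# Variance estimates reported for the attachment-style study.
between_groups = 23.27
within_groups = 0.53
cutoff = 3.01  # F needed to reject the null hypothesis at the .05 level

f = between_groups / within_groups
f_rounded = round(f, 2)     # 43.91, matching the reported F ratio
reject_null = f > cutoff    # True: jealousy differs across attachment styles
```

Because 43.91 is far beyond the cutoff of 3.01, the decision to reject the null hypothesis is clear-cut in this study.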

An Analogy

Some students find an analogy helpful in understanding the analysis of variance. The analogy is to what engineers call the signal-to-noise ratio. For example, your ability to make out the words in a staticky cell phone conversation depends on the strength of the signal versus the amount of random noise. With the F ratio in the analysis of variance, the difference among the means of the samples is like the signal; it is the information of interest. The variation within the samples is like the noise. When the variation among the samples is sufficiently great in comparison to the variation within the samples, you conclude that there is a significant effect.


F distribution mathematically defined curve that is the comparison distribution used in an analysis of variance.

F table table of cutoff scores on the F distribution.

Answers

1. Analysis of variance is used when you are comparing means of samples from more than two populations.

2. (a) The within-groups population variance estimate is based on the variation among the scores in each of the samples. (b) It is not affected. (c) Whether the null hypothesis is true has to do with whether the means of the populations differ. Thus, the within-groups estimate is not affected by whether the null hypothesis is true because the variation within each population (which is the basis for the variation in each sample) is not affected by whether the population means differ.

3. (a) The between-groups population variance estimate is based on the variation among the means of the samples. (b) It is larger when the null hypothesis is false. (c) Whether the null hypothesis is true has to do with whether the means of the populations differ. When the null hypothesis is false, the means of the populations differ. Thus, the between-groups estimate is bigger when the null hypothesis is false, because the variation among the means of the populations (which is one basis for the variation among the means of the samples) is greater when the population means differ.

4. Two sources of variation that can contribute to the between-groups population variance estimate are (i) variation among the scores in each of the populations (that is, variation due to chance factors) and (ii) variation among the means of the populations (that is, variation due to a treatment effect).

5. (a) The F ratio is the ratio of the between-groups population variance estimate to the within-groups population variance estimate. (b) Both estimates are based entirely on the same source of variation—the variation among the scores in each of the populations (that is, due to chance factors). (c) The between-groups estimate is also influenced by the variation among the means of the populations (that is, a treatment effect), whereas the within-groups estimate is not. Thus, when the null hypothesis is false (and thus the means of the populations are not the same), the between-groups estimate will be bigger than the within-groups estimate.

Carrying Out an Analysis of Variance

Now that we have considered the basic logic of the analysis of variance, we will go through an example to illustrate the details. (We use a fictional study to keep the numbers simple.)

Suppose a social psychologist is studying the influence of knowledge of previous criminal record on juries’ perceptions of the guilt or innocence of defendants. The researcher recruits 15 volunteers who have been selected for jury duty (but have not yet served at a trial). The researcher shows them a video of a four-hour trial in which a woman is accused of passing bad checks. Before viewing the tape, however, all of the research participants are given a “background sheet” with age, marital status, education, and other such information about the accused woman. The sheet is the same for all 15 participants, with one difference. For five of the participants, the last section of the sheet says that the woman has been convicted several times before of passing bad checks; we will call those participants the Criminal Record group. For five other participants, the last section of the sheet says the woman has a completely clean criminal record—the Clean Record group. For the remaining five participants, the sheet does not mention anything about criminal record one way or the other—the No Information group.

The participants are randomly assigned to the groups. After viewing the tape of the trial, all 15 participants make a rating on a 10-point scale, which runs from completely sure she is innocent (1) to completely sure she is guilty (10). The results of this fictional study are shown in Table 9–3. As you can see, the means of the three groups are different (8, 4, and 5). Yet there is also quite a bit of variation within each of the three groups. Population variance estimates from the scores in each of these three groups are 4.5, 5.0, and 6.5.

You need to figure the following numbers to test the hypothesis that the three populations are different: (a) a population variance estimate based on the variation of the scores in each of the samples, (b) a population variance estimate based on the


differences among the group means, and (c) the ratio of the two, the F ratio. (In addition, you need the significance cutoff F from an F table.)

Figuring the Within-Groups Estimate of the Population Variance

You can estimate the population variance from any one group (that is, from any one sample) using the usual method of estimating a population variance from a sample. First, you figure the sum of the squared deviation scores. That is, you take the deviation of each score from its group's mean, square that deviation score, and sum all the squared deviation scores. Second, you divide that sum of squared deviation scores by that group's degrees of freedom. (The degrees of freedom for a group are the number of scores in the group minus 1.) For the example, as shown in Table 9–3, this gives an estimated population variance of 4.5 based on the Criminal Record group's scores, an estimate of 5.0 based on the Clean Record group's scores, and an estimate of 6.5 based on the No Information group's scores.

Once again, in the analysis of variance, as with the t test, we assume that the populations have the same variance and that the estimates based on each sample's scores are all estimating the same true population variance. The sample sizes are equal in this example, so the estimate for each group is based on an equal amount of information. Thus (unlike with the t test), you can pool these variance estimates by straight averaging. This gives an overall estimate of the population variance based on the variation within groups of 5.33 (that is, the sum of 4.5, 5.0, and 6.5, which is 16, divided by 3, the number of groups).

To summarize, the two steps are:

●A Figure population variance estimates based on each group's scores.
●B Average these variance estimates.

The estimated population variance based on the variation of the scores within each of the groups is the within-groups variance estimate. This is symbolized as S²Within or MSWithin. MSWithin is short for mean squares within. The term mean squares is another name for the variance, because the variance is the mean of the squared deviations. (S²Within or MSWithin is also sometimes called the error variance and symbolized as S²Error or MSError.)

S²Within or MSWithin: within-groups estimate of the population variance.
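The two steps above can be sketched in a few lines of Python, using the fictional ratings from Table 9–3. (This sketch is ours, not the book's; `statistics.variance` uses the n − 1 denominator, which matches the estimated population variance described here.)

```python
from statistics import variance, mean

# Fictional ratings from Table 9-3, one list per group
criminal = [10, 7, 5, 10, 8]
clean = [5, 1, 3, 7, 4]
none_given = [4, 6, 9, 3, 3]

# Step A: population variance estimate from each group (n - 1 denominator)
estimates = [variance(g) for g in (criminal, clean, none_given)]
print(estimates)             # the three estimates: 4.5, 5.0, and 6.5

# Step B: straight-average the estimates (valid because sample sizes are equal)
ms_within = mean(estimates)
print(round(ms_within, 2))   # → 5.33
```

Because the three samples are the same size, straight averaging is the correct way to pool; with unequal sample sizes each estimate would instead be weighted by its degrees of freedom.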

Table 9–3 Results of the Criminal Record Study (Fictional Data)

         Criminal Record Group           Clean Record Group              No Information Group
         Rating  Deviation  Squared      Rating  Deviation  Squared      Rating  Deviation  Squared
                 from Mean  Deviation            from Mean  Deviation            from Mean  Deviation
                            from Mean                       from Mean                       from Mean

          10       2          4            5       1          1            4      -1          1
           7      -1          1            1      -3          9            6       1          1
           5      -3          9            3      -1          1            9       4         16
          10       2          4            7       3          9            3      -2          4
           8       0          0            4       0          0            3      -2          4
   Σ:     40       0         18           20       0         20           25       0         26

M = 40/5 = 8. S² = 18/4 = 4.5.    M = 20/5 = 4. S² = 20/4 = 5.0.    M = 25/5 = 5. S² = 26/4 = 6.5.



In terms of a formula,

S²Within or MSWithin = (S²1 + S²2 + … + S²Last) / NGroups    (9–1)

The within-groups population variance estimate is the sum of the population variance estimates based on each sample, divided by the number of groups.

In this formula, S²1 is the estimated population variance based on the scores in the first group (the group from Population 1), S²2 is the estimated population variance based on the scores in the second group, and S²Last is the estimated population variance based on the scores in the last group. (The dots, or ellipsis, in the formula show that you are to fill in a population variance estimate for as many other groups as there are in the analysis.) NGroups is the number of groups.

Using this formula for our figuring, we get

S²Within or MSWithin = (4.5 + 5.0 + 6.5)/3 = 16/3 = 5.33

Figuring the Between-Groups Estimate of the Population Variance

Figuring the between-groups estimate of the population variance also involves two steps (though quite different ones from the within-groups estimate). First, estimate, from the means of your samples, the variance of a distribution of means. Second, based on the variance of this distribution of means, figure the variance of the population of individuals. Here are the two steps in more detail:

●A Estimate the variance of the distribution of means: Add up the sample means' squared deviations from the overall mean (the mean of all the scores) and divide this by the number of means minus 1.

You can think of the means of your samples as taken from a distribution of means. Follow the standard procedure of using the scores in a sample to estimate the variance of the population from which these scores are taken. In this situation, you think of the means of your samples as the scores and the distribution of means as the population from which these scores come. What this all boils down to are the following procedures. You begin by figuring the sum of squared deviations. (You find the mean of your sample means, figure the deviation of each sample mean from this mean of means, square each of these deviations, and then sum these squared deviations.) Then, divide this sum of squared deviations by the degrees of freedom, which is the number of means minus 1. In terms of a formula (when sample sizes are all equal),

S²M = Σ(M − GM)² / dfBetween    (9–2)

The estimated variance of the distribution of means is the sum of each sample mean's squared deviation from the grand mean, divided by the degrees of freedom for the between-groups population variance estimate.

In this formula, S²M is the estimated variance of the distribution of means (estimated based on the means of the samples in your study). M is the mean of each of your samples. GM is the grand mean, the overall mean of all your scores, which is also the mean of your means. dfBetween is the degrees of freedom in the between-groups estimate, the number of groups minus 1. Stated as a formula,

dfBetween = NGroups − 1    (9–3)

The degrees of freedom for the between-groups population variance estimate is the number of groups minus 1.

grand mean (GM): overall mean of all the scores, regardless of what group they are in; when group sizes are equal, mean of the group means.

In the criminal record example, the three means are 8, 4, and 5. The figuring of S²M is shown in Table 9–4.



●B Figure the estimated variance of the population of individual scores: Multiply the variance of the distribution of means by the number of scores in each group.

What we just figured in Step ●A, from a sample of a few means, is the estimated variance of a distribution of means. From this we want to estimate the variance of the population (the distribution of individuals) on which the distribution of means is based. We saw in Chapter 5 that the variance of a distribution of means is smaller than the variance of the population (the distribution of individuals) that it is based on. This is because means are less likely to be extreme than are individual scores (because any one sample is unlikely to include several scores that are extreme in the same direction). Specifically, you learned in Chapter 5 that the variance of a distribution of means is the variance of the distribution of individual scores divided by the number of scores in each sample.

Now, however, we are going to reverse what we did in Chapter 5. In Chapter 5 you figured the variance of the distribution of means by dividing the variance of the distribution of individuals by the sample size. Now you are going to figure the variance of the distribution of individuals by multiplying the variance of the distribution of means by the sample size (see Table 9–5). That is, to come up with the variance of the population of individuals, you multiply your estimate of the variance of the distribution of means by the sample size in each of the groups. The result of all this is the between-groups variance estimate. Stated as a formula (for when sample sizes are equal),

S²Between or MSBetween = (S²M)(n)    (9–4)

In this formula, S²Between or MSBetween is the estimate of the population variance based on the variation between the means (the between-groups population variance estimate). n is the number of participants in each sample.

Let's return to our example in which there were five participants in each sample and an estimated variance of the distribution of means of 4.34.

Table 9–4 Estimated Variance of the Distribution of Means Based on Means of the Three Experimental Groups in the Criminal Record Study (Fictional Data)

       Sample Means    Deviation from    Squared Deviation
       (M)             Grand Mean        from Grand Mean
                       (M − GM)          (M − GM)²

        4               −1.67             2.79
        8                2.33             5.43
        5                −.67              .45
  Σ:   17                −.01             8.67

GM = (ΣM)/NGroups = 17/3 = 5.67; S²M = Σ(M − GM)²/dfBetween = 8.67/2 = 4.34.

Source: Hazan, C., & Shaver, P. (1987). Romantic love conceptualized as an attachment process. Journal of Personality and Social Psychology, 52, 515. Published by the American Psychological Association. Reprinted with permission.
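The two between-groups steps can be checked the same way (again our own sketch, not the book's). Note that exact arithmetic gives 4.33 and 21.67, while the text's 4.34 and 21.70 come from first rounding each deviation in Table 9–4 to two decimals.

```python
from statistics import mean

group_means = [8, 4, 5]   # sample means from Table 9-3
n = 5                     # number of scores in each group

# Step A: estimated variance of the distribution of means (formula 9-2)
gm = mean(group_means)    # grand mean, 17/3
s2_m = sum((m - gm) ** 2 for m in group_means) / (len(group_means) - 1)
print(round(s2_m, 2))     # → 4.33

# Step B: scale back up to the variance of individual scores (formula 9-4)
ms_between = s2_m * n
print(round(ms_between, 2))   # → 21.67
```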

Table 9–5 Comparison of Figuring the Variance of a Distribution of Means from the Variance of a Distribution of Individuals, and the Reverse

• From distribution of individuals to distribution of means: S²M = S²/n
• From distribution of means to distribution of individuals: S² = (S²M)(n)
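The two directions in Table 9–5 are exact inverses of each other, which a short Python check makes plain (the starting value 20.0 is arbitrary, chosen only for illustration):

```python
n = 5
s2_individuals = 20.0           # variance of a distribution of individuals

s2_means = s2_individuals / n   # individuals -> means: divide by group size
print(s2_means)                 # → 4.0

s2_back = s2_means * n          # means -> individuals: multiply by group size
print(s2_back)                  # → 20.0, right back where we started
```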

The between-groups population variance estimate (or mean squares between) is the estimated variance of the distribution of means multiplied by the number of scores in each group.

S²Between or MSBetween: between-groups estimate of the population variance.



T I P F O R S U C C E S S A very common mistake when figuring the F ratio is to turn the formula upside down. Just remember it is as simple as Black and White, so it is Between divided by Within.

multiplying 4.34 by 5 gives a between-groups population variance estimate of 21.70. In terms of the formula,

S²Between or MSBetween = (S²M)(n) = (4.34)(5) = 21.70

Figuring the F Ratio

The F ratio is the ratio of the between-groups to the within-groups estimate of the population variance. Stated as a formula,

F = S²Between/S²Within or MSBetween/MSWithin    (9–5)

In the example, the ratio of between to within is 21.70 to 5.33. Carrying out the division gives an F ratio of 4.07. In terms of the formula,

F = S²Between/S²Within = 21.70/5.33 = 4.07
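All three quantities can be combined into one end-to-end sketch in Python (the function name `f_ratio` is ours, not the book's). At full precision the result is 4.0625, which differs slightly from the text's 4.07 because the text rounds the two variance estimates to 21.70 and 5.33 before dividing.

```python
from statistics import mean, variance

def f_ratio(groups):
    """F for equal-sized groups: between-groups over within-groups estimate."""
    n = len(groups[0])
    ms_within = mean(variance(g) for g in groups)   # average the group estimates
    means = [mean(g) for g in groups]
    gm = mean(means)
    s2_m = sum((m - gm) ** 2 for m in means) / (len(groups) - 1)
    return (s2_m * n) / ms_within                   # formula 9-5

f = f_ratio([[10, 7, 5, 10, 8], [5, 1, 3, 7, 4], [4, 6, 9, 3, 3]])
print(round(f, 4))   # → 4.0625
```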

The F Distribution

You are not quite done. You still need to find the cutoff for the F ratio that is large enough to reject the null hypothesis. This requires a distribution of F ratios that you can use to figure out what is an extreme F ratio.

In practice, you simply look up the needed cutoff on a table (or read the exact significance from the computer output). To understand where that number on the table comes from, you need to understand the F distribution. The easiest way to understand this distribution is to think about how you would go about making one.

Start with three identical populations. Next, randomly select five scores from each. Then, on the basis of these three samples (of five scores each), figure the F ratio. (That is, use these scores to make a within-groups estimate and a between-groups estimate, then divide the between estimate by the within estimate.) Let's say that you do this and the F ratio you come up with is 1.36. Now you select three new random samples of five scores each and figure the F ratio using these three samples. Perhaps you get an F of .93. If you do this whole process many, many times, you will eventually get a lot of F ratios. The distribution of all possible F ratios figured in this way (from random samples from identical populations) is called the F distribution. Figure 9–3 shows an example of an F distribution. (There are many different F distributions, and each has a slightly different shape. The exact shape depends on how many samples you take each time and how many scores are in each sample. The general shape is like that shown in the figure.)

No one actually goes about making F distributions in this way. It is a mathematical distribution whose exact characteristics can be found from a formula. Statisticians can also prove that, if you had the patience to follow this procedure of taking random samples and figuring the F ratio of each for a very long time, you would get the same result.
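Although no one builds the distribution by hand, the sampling procedure just described is easy to simulate (our own sketch; the cutoff 3.89 used below is the .05 value for an F distribution with 2 and 12 degrees of freedom):

```python
import random
from statistics import mean, variance

random.seed(1)   # fixed seed so repeated runs match

def one_f(n_groups=3, n_per_group=5):
    # Draw every score from the SAME normal population, so the null hypothesis is true
    groups = [[random.gauss(0, 1) for _ in range(n_per_group)]
              for _ in range(n_groups)]
    ms_within = mean(variance(g) for g in groups)
    means = [mean(g) for g in groups]
    gm = mean(means)
    ms_between = sum((m - gm) ** 2 for m in means) / (n_groups - 1) * n_per_group
    return ms_between / ms_within

fs = [one_f() for _ in range(10_000)]
print(min(fs) >= 0)   # → True: a ratio of variances can never be negative
prop = sum(f > 3.89 for f in fs) / len(fs)
print(prop)           # close to .05, as the cutoff logic predicts
```

The simulated distribution also shows the long right tail described next: most F ratios pile up near 1, with a thin tail of large values.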

As you can see in Figure 9–3, the F distribution is not symmetrical but has a long tail on the right. The reason for the positive skew is that an F distribution is a distribution of ratios of variances. Variances are always positive numbers. (A variance is an average of squared deviations, and anything squared is a positive number.) A ratio of a positive number to a positive number can never be less than 0. Yet there is nothing to stop a ratio from being a very high number. Thus, the F