Comparison of Treatment Means

A comparison (aka, a contrast) is a weighted sum of factor level means. Researchers use comparisons to address additional questions that are not answered by a standard, omnibus analysis of variance.

A standard, omnibus analysis of variance answers one question: Do mean scores differ significantly among treatment groups? A significant F ratio indicates that the mean score in at least one treatment group differs significantly from the mean score in at least one other treatment group; but a significant F ratio does not reveal which mean scores are significantly different.

To understand which mean scores are significantly different, researchers conduct follow-up analyses in which they look at comparisons.

Attributes of a Comparison

A comparison is a weighted sum of mean scores. Mathematically, a comparison can be expressed as:

L = Σ cj Xj    (sum over j = 1 to k)

In addition, all comparisons are subject to the following constraint:

Σ nj cj = 0    (sum over j = 1 to k)

In the equations above, L is the value of the comparison, cj is a coefficient (weight) for treatment j, Xj is the mean score for treatment j, nj is the number of subjects assigned to treatment j, and k is the number of treatment groups.

Alternative Definition of a Comparison

In some textbooks, you may see a comparison defined as a weighted sum of total scores (Tj), rather than as a weighted sum of mean scores (Xj).

L = Σ cj Tj    (sum over j = 1 to k)

On this website, when we refer to a comparison - in text or in equations - we will be referring to a weighted sum of mean scores. Others may make a different choice. So, if you read about comparisons in other places, be aware of which definition is being used.

With balanced designs (i.e., designs in which sample size is constant across treatment groups), the necessary condition for a comparison reduces to:

Σ cj = 0

And, for convenience, we will assign one additional constraint to the comparisons that we work with in this tutorial:

Σ | cj | = 2

In the equation above, the symbol | cj | refers to the absolute value of cj .

So, here are the key things you should know about a comparison from a balanced experimental design:

  • A comparison is a weighted sum of factor level means.
  • The sum of the raw coefficient values (weights) is equal to zero.
  • The sum of the coefficient absolute values is equal to two.
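
To make these definitions concrete, here is a minimal Python sketch (not part of the original lesson) that evaluates a comparison as a weighted sum of mean scores and checks both balanced-design constraints. The function names and the sample mean scores are illustrative assumptions, not values from any particular study.

    # Sketch: evaluate a comparison L = Σ cj * Xj and check the
    # balanced-design constraints Σ cj = 0 and Σ |cj| = 2.
    def comparison_value(coefficients, means):
        """Weighted sum of factor level means."""
        return sum(c * x for c, x in zip(coefficients, means))

    def satisfies_constraints(coefficients, tol=1e-9):
        """True if the coefficients sum to zero and their absolute values sum to two."""
        return (abs(sum(coefficients)) < tol
                and abs(sum(abs(c) for c in coefficients) - 2) < tol)

    # Hypothetical example: compare Group 1 with the average of Groups 2 and 3.
    c = [1, -0.5, -0.5]
    X = [50, 60, 70]                      # hypothetical group means
    print(comparison_value(c, X))         # 50 - (60 + 70)/2 = -15.0
    print(satisfies_constraints(c))       # True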

How to Use Comparisons

Researchers use comparisons to identify particular treatment means for analysis. To understand how they do this, it helps to look at an example. So, consider the following completely randomized, one-factor experiment.

Treatment
Group 1   Group 2   Group 3
  210       210       180
  240       240       210
  270       240       210
  270       270       210
  300       270       240

We conducted a standard analysis of variance for this experiment in a previous lesson (see One-Way Analysis of Variance: Example). That analysis resulted in a significant F ratio (p = .04).

Additional Research Questions

The significant, omnibus F test (p = .04) tells us that the mean score in at least one treatment group is different from the mean score in at least one other treatment group. But it does not say anything about how the mean scores differ. For example, here are some additional research questions that are not addressed by an omnibus F test:

  • Is the mean score in Group 1 significantly different from the mean score in Group 2?
  • Is the mean score in Group 1 significantly different from the mean score in Group 3?
  • Is the mean score in Group 2 significantly different from the mean score in Group 3?
  • Is the mean score in Group 1 significantly different from the average of mean scores in Groups 2 and 3?
  • Is the mean score in Group 2 significantly different from the average of mean scores in Groups 1 and 3?
  • Is the mean score in Group 3 significantly different from the average of mean scores in Groups 1 and 2?

Comparisons and Research Questions

Each of the research questions listed above can be represented mathematically by a comparison (a weighted sum of factor level means) in the following form:

Li = Σ cj Xj

Li = c1X1 + c2X2 + c3X3

To illustrate the process, let's define a comparison for each research question listed above.

  • Is the mean score in Group 1 significantly different from the mean score in Group 2?

    A comparison (L1) to represent this research question is obtained by setting c1 = 1, c2 = -1, and c3 = 0, as shown below:

    L1 = 1 * X1 - 1 * X2 + 0 * X3

    L1 = X1 - X2

  • Is the mean score in Group 1 significantly different from the mean score in Group 3?

    A comparison (L2) to represent this research question is obtained by setting c1 = 1, c2 = 0, and c3 = -1, as shown below:

    L2 = 1 * X1 + 0 * X2 - 1 * X3

    L2 = X1 - X3

  • Is the mean score in Group 2 significantly different from the mean score in Group 3?

    A comparison (L3) to represent this research question is obtained by setting c1 = 0, c2 = 1, and c3 = -1, as shown below:

    L3 = 0 * X1 + 1 * X2 - 1 * X3

    L3 = X2 - X3

  • Is the mean score in Group 1 significantly different from the average of mean scores in Groups 2 and 3?

    A comparison (L4) to represent this research question is obtained by setting c1 = 1, c2 = -0.5, and c3 = -0.5, as shown below:

    L4 = 1 * X1 - 0.5 * X2 - 0.5 * X3

    L4 = X1 - (X2 + X3) / 2

  • Is the mean score in Group 2 significantly different from the average of mean scores in Groups 1 and 3?

    A comparison (L5) to represent this research question is obtained by setting c1 = -0.5, c2 = 1, and c3 = -0.5, as shown below:

    L5 = -0.5 * X1 + 1 * X2 - 0.5 * X3

    L5 = X2 - (X1 + X3) / 2

  • Is the mean score in Group 3 significantly different from the average of mean scores in Groups 1 and 2?

    A comparison (L6) to represent this research question is obtained by setting c1 = -0.5, c2 = -0.5, and c3 = 1, as shown below:

    L6 = -0.5 * X1 - 0.5 * X2 + 1 * X3

    L6 = X3 - (X1 + X2) / 2

Notice that each of the comparisons satisfies the two constraints that we mentioned earlier for a balanced experimental design:

Σ cj = 0   and   Σ | cj | = 2

For each comparison, the sum of the raw coefficient values is equal to zero; and the sum of the coefficient absolute values is equal to two.
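
As an illustrative check (a sketch, not part of the original lesson), the Python snippet below computes the group means from the data table above, evaluates the six comparisons L1 through L6, and confirms that every set of coefficients satisfies both constraints.

    # Sketch: evaluate L1..L6 for the example experiment and check the constraints.
    groups = [
        [210, 240, 270, 270, 300],   # Group 1
        [210, 240, 240, 270, 270],   # Group 2
        [180, 210, 210, 210, 240],   # Group 3
    ]
    means = [sum(g) / len(g) for g in groups]   # [258.0, 246.0, 210.0]

    comparisons = {
        "L1": [1, -1, 0],
        "L2": [1, 0, -1],
        "L3": [0, 1, -1],
        "L4": [1, -0.5, -0.5],
        "L5": [-0.5, 1, -0.5],
        "L6": [-0.5, -0.5, 1],
    }

    for name, c in comparisons.items():
        value = sum(cj * xj for cj, xj in zip(c, means))
        ok = abs(sum(c)) < 1e-9 and abs(sum(abs(cj) for cj in c) - 2) < 1e-9
        print(f"{name} = {value:.1f}, constraints satisfied: {ok}")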

Comparison Sum of Squares

With a balanced design, the sum of squares for a given comparison ( Li ) can be computed from the following formula:

SSi = n * Li² / Σ cij²

where SSi is the sum of squares for comparison Li , Li is the value of the comparison, n is the sample size in each group, and cij is the coefficient (weight) for level j in the formula for comparison Li.

When the design is unbalanced and Σ njcj = 0, the sum of squares for a given comparison ( Li ) can be computed from the following formula:

SSi = ( Σ nj cij Xj )² / Σ nj cij²

where SSi is the sum of squares for comparison Li , nj is the sample size in Group j , cij is the coefficient (weight) for level j in the formula for comparison Li, and Xj is the mean score for Group j .

Note: For an example that uses this formula to compute the sum of squares for a comparison, see Problem 2.
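
The sketch below (illustrative Python, not from the original lesson) implements both sum-of-squares formulas. The first function assumes a balanced design with a common sample size n; the second handles unequal sample sizes, provided the coefficients satisfy Σ nj cj = 0. With equal group sizes the two functions return the same value, as the example at the bottom shows using the group means from the experiment above (258, 246, and 210, with n = 5).

    # Sketch: sum of squares for a comparison.
    def ss_balanced(n, coefficients, means):
        """SSi = n * Li^2 / Σ cij^2, for a common sample size n per group."""
        L = sum(c * x for c, x in zip(coefficients, means))
        return n * L ** 2 / sum(c ** 2 for c in coefficients)

    def ss_general(sizes, coefficients, means):
        """SSi = (Σ nj * cij * Xj)^2 / Σ nj * cij^2, assuming Σ nj * cij = 0."""
        numerator = sum(n * c * x for n, c, x in zip(sizes, coefficients, means)) ** 2
        denominator = sum(n * c ** 2 for n, c in zip(sizes, coefficients))
        return numerator / denominator

    # Comparison L1 = X1 - X2 from the example experiment (n = 5 per group).
    c = [1, -1, 0]
    means = [258, 246, 210]
    print(ss_balanced(5, c, means))            # 5 * 12^2 / 2 = 360.0
    print(ss_general([5, 5, 5], c, means))     # 360.0 (same result for a balanced design)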

Planned Comparisons vs. Post Hoc Comparisons

Comparisons fall into one of two groups, depending on their origin in the research plan.

  • Planned comparisons. Planned comparisons (aka, a priori comparisons) test hypotheses that were posed upfront in the analysis plan. These are the hypotheses that the experiment was designed to test.
  • Post hoc comparisons. Post hoc comparisons (aka, a posteriori comparisons) test hypotheses that did not appear in the original analysis plan. These are hypotheses posed after data collection to shed additional light on relationships between mean scores.

Why Do We Care?

Why do we care about comparisons? With the right tweaks to a standard analysis of variance, comparisons can be tested for statistical significance. With comparisons, we can perform follow-up analyses to address research questions that are not addressed by a standard, omnibus analysis of variance.

To perform these follow-up analyses, you need to do all of the things we've covered in this lesson:

  • Define a comparison that represents the research question of interest.
  • Compute the value of that comparison.
  • Calculate a sum of squares for that comparison.
  • Discriminate between planned comparisons and post hoc comparisons.

In subsequent lessons, we will fill in the missing details that will allow you to supplement a standard analysis of variance with relevant follow-up tests.

Test Your Understanding

Problem 1

You're running a single-factor experiment with four treatment groups. Group 1 is a control group. Subjects in Group 1 do not receive any vitamins. Subjects in Groups 2, 3, and 4 receive vitamin A, vitamin B, or vitamin C, respectively.

Group 1 Group 2 Group 3 Group 4
Control Vitamin A Vitamin B Vitamin C

You want to know whether the mean score in the control group (X1) is significantly different from the average of the mean scores (X2, X3, and X4) in the other three groups. (Assume that sample size is the same in each group.)

Which of the following comparisons describes the research question you want to test?

(A) L1 = X1 - X2
(B) L2 = X1 - X2 - X3 - X4
(C) L3 = X1 - (X2 + X3 + X4) / 3
(D) L4 = X1 - (X2 + X3 + X4) / 4
(E) None of the above.

Solution

The correct answer is (C). The comparison L3 is expressed in the correct form:

L = Σ cj Xj

where c1 = 1, c2 = -1/3, c3 = -1/3, and c4 = -1/3.

Note also that the coefficients of comparison L3 satisfy the constraints that we described earlier:

Σ cj = 0   and   Σ | cj | = 2

Comparison L3 measures the difference between the mean score in the control group (X1) and the average of the other three treatment means - (X2 + X3 + X4) / 3. If L3 were close to zero, we would conclude that the mean of the control group was not very different from the mean of the other three groups combined.

Comparison L1 compares the mean score in Group 1 to the mean score in Group 2, but it ignores the mean scores in Group 3 and Group 4. So L1 does not address the research question posed by the researcher.

Comparisons L2 and L4 also do not address the research question of interest. And comparisons L2 and L4 do not satisfy the constraints that we described earlier:

Σ cj = 0   and   Σ | cj | = 2

So comparisons L2 and L4 cannot be correct answers to this problem.
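
As a quick arithmetic check (a sketch in Python, not part of the original solution), the coefficient constraints for the four candidate comparisons can be verified directly:

    # Sketch: check Σ cj = 0 and Σ |cj| = 2 for each candidate comparison in Problem 1.
    candidates = {
        "L1": [1, -1, 0, 0],
        "L2": [1, -1, -1, -1],
        "L3": [1, -1/3, -1/3, -1/3],
        "L4": [1, -1/4, -1/4, -1/4],
    }
    for name, c in candidates.items():
        print(f"{name}: Σ c = {sum(c):+.2f}, Σ |c| = {sum(abs(cj) for cj in c):.2f}")
    # Output shows that only L1 and L3 satisfy both constraints, and only L3
    # involves all three vitamin groups, so L3 matches the research question.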

Problem 2

You're running a single-factor experiment with three treatment groups. Each group has 10 subjects. Mean scores for each group appear below:

Group 1 Group 2 Group 3
50 60 70

You want to test the hypothesis that the mean score in Group 1 is not significantly different from the mean score in Group 3. Here is the comparison relevant to that hypothesis:

L1 = 1 * X1 + 0 * X2 - 1 * X3

L1 = X1 - X3

L1 = 50 - 70 = -20

What is the sum of squares for this comparison?

(A) 1000
(B) 2000
(C) 3000
(D) 4000
(E) None of the above

Solution

The correct answer is (B). Since we are dealing with a balanced design, the sum of squares for comparison L1 can be computed from the following formula:

SSi = n * Li² / Σ cij²

SS1 = 10 * (-20)² / [ (1)² + (0)² + (-1)² ]

SS1 = 4000 / 2 = 2000

where SSi is the sum of squares for comparison Li , Li is the value of the comparison, n is the sample size in each group, and cij is the coefficient (weight) for level j in the formula for comparison Li.
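
For readers who want to confirm the arithmetic programmatically, here is a short Python check (a sketch mirroring the balanced-design formula above):

    # Sketch: verify the sum of squares for comparison L1 in Problem 2.
    n = 10                                   # subjects per group
    c = [1, 0, -1]                           # comparison coefficients
    X = [50, 60, 70]                         # group means
    L1 = sum(cj * xj for cj, xj in zip(c, X))        # 50 - 70 = -20
    SS1 = n * L1 ** 2 / sum(cj ** 2 for cj in c)     # 10 * 400 / 2
    print(L1, SS1)                                   # -20 2000.0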