Stat Trek

Teach yourself statistics

ANOVA With Full Factorial Experiments

This lesson explains how to use analysis of variance (ANOVA) with balanced, completely randomized, full factorial experiments. The discussion covers general issues related to design, analysis, and interpretation with fixed factors and with random factors.

Future lessons expand on this discussion, using sample problems to demonstrate the analysis under fixed-effects, random-effects, and mixed-model scenarios.

Design Considerations

Since this lesson is all about implementing analysis of variance with a balanced, completely randomized, full factorial experiment, we begin by answering four relevant questions:

  • What is a full factorial experiment?
  • What is a completely randomized design?
  • What are the data requirements for analysis of variance with a completely randomized, full factorial design?
  • What is a balanced design?

What is a Full Factorial Experiment?

A factorial experiment allows researchers to study the joint effect of two or more factors on a dependent variable.

With a full factorial design, the experiment includes a treatment group for every combination of factor levels. Therefore, the number of treatment groups is the product of factor levels. For example, consider the full factorial design shown below:

         A1                             A2
         B1        B2        B3        B1        B2        B3
C1   Group 1   Group 2   Group 3   Group 4   Group 5   Group 6
C2   Group 7   Group 8   Group 9   Group 10  Group 11  Group 12
C3   Group 13  Group 14  Group 15  Group 16  Group 17  Group 18
C4   Group 19  Group 20  Group 21  Group 22  Group 23  Group 24

Factor A has two levels, factor B has three levels, and factor C has four levels. Therefore, the full factorial design has 2 x 3 x 4 = 24 treatment groups.

Full factorial designs can be characterized by the number of treatment levels associated with each factor, or by the number of factors in the design. Thus, the design above could be described as a 2 x 3 x 4 design (number of treatment levels) or as a three-factor design (number of factors).
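As a quick check on this arithmetic, the short Python sketch below (with hypothetical level labels) enumerates every treatment group in the 2 x 3 x 4 design:

  from itertools import product

  # Hypothetical level labels for a 2 x 3 x 4 full factorial design
  factor_a = ["A1", "A2"]
  factor_b = ["B1", "B2", "B3"]
  factor_c = ["C1", "C2", "C3", "C4"]

  # A full factorial design has one treatment group per combination of levels,
  # so the number of groups is the product of the numbers of factor levels.
  groups = list(product(factor_a, factor_b, factor_c))
  print(len(groups))   # 24 = 2 x 3 x 4
  print(groups[0])     # ('A1', 'B1', 'C1') corresponds to Group 1 above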

Note: Another type of factorial experiment is a fractional factorial. Unlike full factorial experiments, which include a treatment group for every combination of factor levels, fractional factorial experiments include only a subset of possible treatment groups. Our focus in this lesson is on full factorial experiments, rather than fractional factorial experiments.

Completely Randomized Design

With a full factorial experiment, a completely randomized design is distinguished by the following attributes:

  • The design has two or more factors (i.e., two or more independent variables), each with two or more levels.
  • Treatment groups are defined by a unique combination of non-overlapping factor levels.
  • The number of treatment groups is the product of factor levels.
  • Experimental units are randomly selected from a known population.
  • Each experimental unit is randomly assigned to one, and only one, treatment group.
  • Each experimental unit provides one dependent variable score.

Data Requirements

Analysis of variance requires that the dependent variable be measured on an interval scale or a ratio scale. In addition, analysis of variance with a full factorial experiment makes three assumptions about dependent variable scores:

  • Independence. The dependent variable score for each experimental unit is independent of the score for any other unit.
  • Normality. In the population, dependent variable scores are normally distributed within treatment groups.
  • Equality of variance. In the population, the variance of dependent variable scores in each treatment group is equal. (Equality of variance is also known as homogeneity of variance or homoscedasticity.)

The assumption of independence is the most important assumption. When that assumption is violated, the resulting statistical tests can be misleading. This assumption is tenable when (a) experimental units are randomly sampled from the population and (b) sampled units are randomly assigned to treatments.

With respect to the other two assumptions, analysis of variance is more forgiving. Violations of normality are less problematic when the sample size is large. And violations of the equal variance assumption are less problematic when the sample size within groups is equal.

Before conducting an analysis of variance with data from a full factorial experiment, it is best practice to check for violations of the normality and equal-variance assumptions, as sketched below.
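For example, one common approach (a sketch, not the only option) uses SciPy's Shapiro-Wilk test to check normality within each treatment group and Levene's test to check homogeneity of variance across groups; the scores below are hypothetical:

  import numpy as np
  from scipy import stats

  # Hypothetical dependent-variable scores for three treatment groups
  groups = [
      np.array([24.0, 27.0, 25.0, 29.0, 26.0]),
      np.array([31.0, 30.0, 33.0, 28.0, 32.0]),
      np.array([22.0, 25.0, 21.0, 24.0, 23.0]),
  ]

  # Shapiro-Wilk test of normality, applied to each group separately;
  # a small P-value signals a violation of the normality assumption
  for i, g in enumerate(groups, start=1):
      stat, p = stats.shapiro(g)
      print(f"Group {i}: Shapiro-Wilk P = {p:.3f}")

  # Levene's test of equal variances across all groups;
  # a small P-value signals a violation of the equal variance assumption
  stat, p = stats.levene(*groups)
  print(f"Levene P = {p:.3f}")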

Balanced versus Unbalanced Design

A balanced design has an equal number of observations in all treatment groups. In contrast, an unbalanced design has an unequal number of observations in some treatment groups.

Balance is not required with one-way analysis of variance, but it is helpful with full factorial designs because:

  • Balanced factorial designs are less vulnerable to violations of the equal variance assumption.
  • Balanced factorial designs have more statistical power.
  • Unbalanced factorial designs can produce confounded factors, making it hard to interpret results.
  • Unbalanced designs use special weights for data analysis, which complicates the analysis.

Note: Our focus in this lesson is on balanced designs.

Analytical Logic

To implement analysis of variance with a balanced, completely randomized, full factorial experiment, a researcher takes the following steps:

  • Specify a mathematical model to describe how main effects and interaction effects influence the dependent variable.
  • Write statistical hypotheses to be tested by experimental data.
  • Specify a significance level for a hypothesis test.
  • Compute the grand mean and the mean scores for each treatment group.
  • Compute sums of squares for each effect in the model.
  • Find the degrees of freedom associated with each effect in the model.
  • Based on sums of squares and degrees of freedom, compute mean squares for each effect in the model.
  • Find the expected value of the mean squares for each effect in the model.
  • Compute a test statistic for each effect, based on observed mean squares and their expected values.
  • Find the P value for each test statistic.
  • Accept or reject the null hypothesis for each effect, based on the P value and the significance level.
  • Assess the magnitude of effect, based on sums of squares.

If you are familiar with one-way analysis of variance (see One-Way Analysis of Variance), you might notice that the analytical logic for a completely randomized, single-factor experiment is very similar to the logic for a completely randomized, full factorial experiment. Here are the main differences:

  • Formulas for mean scores and sums of squares differ, depending on the number of factors in the experiment.
  • Expected mean squares differ, depending on whether the experiment tests fixed effects and/or random effects.

Below, we'll explain how to implement analysis of variance for fixed-effects models, random-effects models, and mixed models with a balanced, two-factor, completely randomized, full factorial experiment.

Mathematical Model

For every experimental design, there is a mathematical model that accounts for all of the independent and extraneous variables that affect the dependent variable.

Fixed Effects

For example, here is the fixed-effects mathematical model for a two-factor, completely randomized, full-factorial experiment:

X ijm = μ + α i + β j + αβ ij + ε m(ij)

where X ijm is the dependent variable score for subject m in treatment group ij; μ is the population mean; α i is the main effect of Factor A at level i; β j is the main effect of Factor B at level j; αβ ij is the interaction effect of Factor A at level i and Factor B at level j; and ε m(ij) is the effect of all other extraneous variables on subject m in treatment group ij.

For this model, it is assumed that ε m(ij) is normally and independently distributed with a mean of zero and a variance of σ²ε. The mean ( μ ) is constant.

Note: The parentheses in ε m(ij) indicate that subjects are nested under treatment groups. When a subject is assigned to only one treatment group, we say that the subject is nested under a treatment.
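To make the model concrete, here is a sketch that simulates scores from this fixed-effects model with hypothetical parameter values (the effects are chosen to sum to zero, a common side condition for fixed effects):

  import numpy as np

  rng = np.random.default_rng(0)

  mu = 50.0                              # population mean (constant)
  alpha = np.array([3.0, -3.0])          # main effects of Factor A (sum to zero)
  beta = np.array([2.0, 0.0, -2.0])      # main effects of Factor B (sum to zero)
  ab = np.array([[1.0, -0.5, -0.5],
                 [-1.0, 0.5, 0.5]])      # interaction effects
  sigma = 4.0                            # standard deviation of the error term
  n = 6                                  # subjects per treatment group

  # X_ijm = mu + alpha_i + beta_j + ab_ij + e_m(ij), with e ~ N(0, sigma^2)
  p, q = len(alpha), len(beta)
  X = (mu + alpha[:, None, None] + beta[None, :, None]
       + ab[:, :, None] + rng.normal(0.0, sigma, size=(p, q, n)))
  print(X.shape)   # (2, 3, 6): one score per subject in each of the 6 groups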

Random Effects

The random-effects mathematical model for a completely randomized full factorial experiment is similar to the fixed-effects mathematical model. It can also be expressed as:

X ijm = μ + α i + β j + αβ ij + ε m(ij)

Like the fixed-effects mathematical model, the random-effects model also assumes that (1) ε m(ij) is normally and independently distributed with a mean of zero and a variance of σ²ε and (2) the mean ( μ ) is constant.

Here's the difference between the two mathematical models. With a fixed-effects model, the experimenter includes all treatment levels of interest in the experiment. With a random-effects model, the experimenter includes a random sample of treatment levels in the experiment. Therefore, in the random-effects mathematical model, the following is true:

  • The main effect ( α i ) is a random variable with a mean of zero and a variance of σ²α.
  • The main effect ( β j ) is a random variable with a mean of zero and a variance of σ²β.
  • The interaction effect ( αβ ij ) is a random variable with a mean of zero and a variance of σ²αβ.

All three effects are assumed to be normally and independently distributed (NID).

Statistical Hypotheses

With a full factorial experiment, it is possible to test all main effects and all interaction effects. For example, here are the null hypotheses (H0) and alternative hypotheses (H1) for each effect in a two-factor full factorial experiment.

Fixed Effects

For fixed-effects models, it is common practice to write statistical hypotheses in terms of treatment effects:

H0: α i = 0 for all i        H0: β j = 0 for all j        H0: αβ ij = 0 for all ij
H1: α i ≠ 0 for some i       H1: β j ≠ 0 for some j       H1: αβ ij ≠ 0 for some ij

Random Effects

For random-effects models, it is common practice to write statistical hypotheses in terms of the variance of treatment levels included in the experiment:

H0: σ²α = 0        H0: σ²β = 0        H0: σ²αβ = 0
H1: σ²α ≠ 0        H1: σ²β ≠ 0        H1: σ²αβ ≠ 0

Significance Level

The significance level (also known as alpha or α) is the probability of rejecting the null hypothesis when it is actually true. The significance level for an experiment is specified by the experimenter, before data collection begins. Experimenters often choose significance levels of 0.05 or 0.01.

A significance level of 0.05 means that there is a 5% chance of rejecting the null hypothesis when it is true. A significance level of 0.01 means that there is a 1% chance of rejecting the null hypothesis when it is true. The lower the significance level, the more persuasive the evidence needs to be before an experimenter can reject the null hypothesis.

Mean Scores

Analysis of variance for a full factorial experiment begins by computing a grand mean, marginal means, and group means. Here are formulas for computing the various means for a balanced, two-factor, full factorial experiment:

  • Grand mean. The grand mean ( X̄ ) is the mean of all observations. With p levels of Factor A, q levels of Factor B, and n observations per treatment group, the total sample size is N = pqn, and the grand mean is computed as follows:

    X̄ = ( 1 / N ) Σi=1..p Σj=1..q Σm=1..n ( X ijm )

  • Marginal means for Factor A. The mean for level i of Factor A is computed as follows:

    X̄ i = ( 1 / nq ) Σj=1..q Σm=1..n ( X ijm )

  • Marginal means for Factor B. The mean for level j of Factor B is computed as follows:

    X̄ j = ( 1 / np ) Σi=1..p Σm=1..n ( X ijm )

  • Group means. The mean of all observations in group ij ( X̄ ij ) is computed as follows:

    X̄ ij = ( 1 / n ) Σm=1..n ( X ijm )

In the equations above, N is the total sample size across all treatment groups, n is the sample size in a single treatment group, p is the number of levels of Factor A, and q is the number of levels of Factor B.
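If the scores are stored in an array X with shape (p, q, n), indexed as X[i, j, m], all of these means reduce to one-line computations. A minimal sketch with hypothetical data:

  import numpy as np

  # Hypothetical scores for a balanced 2 x 3 design with n = 4 per group
  rng = np.random.default_rng(1)
  X = rng.normal(50.0, 5.0, size=(2, 3, 4))    # X[i, j, m]

  grand_mean = X.mean()             # mean of all N = pqn observations
  mean_a = X.mean(axis=(1, 2))      # marginal means of Factor A, one per level i
  mean_b = X.mean(axis=(0, 2))      # marginal means of Factor B, one per level j
  group_means = X.mean(axis=2)      # group means, one per (i, j) treatment group

  print(grand_mean)
  print(mean_a, mean_b, group_means, sep="\n")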

Sums of Squares

A sum of squares is the sum of squared deviations from a mean score. Two-way analysis of variance makes use of five sums of squares:

  • Factor A sum of squares. The sum of squares for Factor A (SSA) measures variation of the marginal means of Factor A ( X̄ i ) around the grand mean ( X̄ ). It can be computed from the following formula:

    SSA = nq Σi=1..p ( X̄ i - X̄ )²

  • Factor B sum of squares. The sum of squares for Factor B (SSB) measures variation of the marginal means of Factor B ( X̄ j ) around the grand mean ( X̄ ). It can be computed from the following formula:

    SSB = np Σj=1..q ( X̄ j - X̄ )²

  • Interaction sum of squares. The sum of squares for the interaction between Factor A and Factor B (SSAB) can be computed from the following formula:

    SSAB = n Σi=1..p Σj=1..q ( X̄ ij - X̄ i - X̄ j + X̄ )²

  • Within-groups sum of squares. The within-groups sum of squares (SSW) measures variation of all scores ( X ijm ) around their respective group means ( X̄ ij ). It can be computed from the following formula:

    SSW = Σi=1..p Σj=1..q Σm=1..n ( X ijm - X̄ ij )²

    Note: The within-groups sum of squares is also known as the error sum of squares (SSE).

  • Total sum of squares. The total sum of squares (SST) measures variation of all scores ( X ijm ) around the grand mean ( X̄ ). It can be computed from the following formula:

    SST = Σi=1..p Σj=1..q Σm=1..n ( X ijm - X̄ )²

In the formulas above, n is the sample size in each treatment group, p is the number of levels of Factor A, and q is the number of levels of Factor B.

It turns out that the total sum of squares is equal to the sum of the component sums of squares, as shown below:

SST = SSA + SSB + SSAB + SSW

As you'll see later on, this relationship will allow us to assess the relative magnitude of any effect (Factor A, Factor B, or the AB interaction) on the dependent variable.
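Continuing the array-based sketch from the previous section, the five sums of squares follow directly from the formulas above, and the last line verifies the additive relationship:

  import numpy as np

  rng = np.random.default_rng(1)
  X = rng.normal(50.0, 5.0, size=(2, 3, 4))    # X[i, j, m], balanced 2 x 3, n = 4
  p, q, n = X.shape

  xbar = X.mean()                   # grand mean
  a = X.mean(axis=(1, 2))           # marginal means of Factor A
  b = X.mean(axis=(0, 2))           # marginal means of Factor B
  g = X.mean(axis=2)                # group means

  ss_a = n * q * np.sum((a - xbar) ** 2)
  ss_b = n * p * np.sum((b - xbar) ** 2)
  ss_ab = n * np.sum((g - a[:, None] - b[None, :] + xbar) ** 2)
  ss_w = np.sum((X - g[:, :, None]) ** 2)
  ss_t = np.sum((X - xbar) ** 2)

  # SST = SSA + SSB + SSAB + SSW
  assert np.isclose(ss_t, ss_a + ss_b + ss_ab + ss_w)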

Degrees of Freedom

The term degrees of freedom (df) refers to the number of independent sample points used to compute a statistic minus the number of parameters estimated from the sample points.

The degrees of freedom used to compute the various sums of squares for a balanced, two-way factorial experiment are shown in the table below:

Sum of squares    Degrees of freedom
Factor A          p - 1
Factor B          q - 1
AB interaction    ( p - 1 )( q - 1 )
Within groups     pq( n - 1 )
Total             npq - 1

Notice that there is an additive relationship between the various sums of squares. The degrees of freedom for total sum of squares (dfTOT) is equal to the degrees of freedom for the Factor A sum of squares (dfA) plus the degrees of freedom for the Factor B sum of squares (dfB) plus the degrees of freedom for the AB interaction sum of squares (dfAB) plus the degrees of freedom for within-groups sum of squares (dfWG). That is,

dfTOT = dfA + dfB + dfAB + dfWG

Mean Squares

A mean square is an estimate of population variance. It is computed by dividing a sum of squares (SS) by its corresponding degrees of freedom (df), as shown below:

MS = SS / df

To conduct analysis of variance with a two-factor, full factorial experiment, we are interested in four mean squares:

  • Factor A mean square. The Factor A mean square ( MSA ) measures variation due to the main effect of Factor A. It can be computed as follows:

    MSA = SSA / dfA

  • Factor B mean square. The Factor B mean square ( MSB ) measures variation due to the main effect of Factor B. It can be computed as follows:

    MSB = SSB / dfB

  • Interaction mean square. The mean square for the AB interaction measures variation due to the AB interaction effect. It can be computed as follows:

    MSAB = SSAB / dfAB

  • Within groups mean square. The within-groups mean square ( MSWG ) measures variation due to differences among experimental units within the same treatment group. It can be computed as follows:

    MSWG = SSW / dfWG
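The degrees of freedom and mean squares are simple arithmetic at this point. A sketch, using hypothetical sums of squares for a balanced 2 x 3 design with n = 4:

  # Degrees of freedom for a balanced two-factor design
  p, q, n = 2, 3, 4
  df_a, df_b = p - 1, q - 1                 # 1 and 2
  df_ab = (p - 1) * (q - 1)                 # 2
  df_wg = p * q * (n - 1)                   # 18
  assert n * p * q - 1 == df_a + df_b + df_ab + df_wg   # additive relationship

  # Hypothetical sums of squares (e.g., from the previous sketch)
  ss_a, ss_b, ss_ab, ss_w = 120.0, 80.0, 60.0, 360.0

  ms_a = ss_a / df_a       # MSA  = 120.0
  ms_b = ss_b / df_b       # MSB  = 40.0
  ms_ab = ss_ab / df_ab    # MSAB = 30.0
  ms_wg = ss_w / df_wg     # MSWG = 20.0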

Expected Value

The expected value of a mean square is the average value of the mean square over a large number of experiments.

Statisticians have derived formulas for the expected value of mean squares for balanced, two-factor, full factorial experiments. The expected values differ, depending on whether the experiment uses all fixed factors, all random factors, or a mix of fixed and random factors.

Fixed-Effects Model

A fixed-effects model describes an experiment in which all factors are fixed factors. The table below shows the expected value of mean squares for a balanced, two-factor, full factorial experiment when both factors are fixed:

Mean square    Expected value
MSA            σ²WG + nqσ²A
MSB            σ²WG + npσ²B
MSAB           σ²WG + nσ²AB
MSWG           σ²WG

In the table above, n is the sample size in each treatment group, p is the number of levels for Factor A, q is the number of levels for Factor B, σ²A is the variance of main effects due to Factor A, σ²B is the variance of main effects due to Factor B, σ²AB is the variance due to interaction effects, and σ²WG is the variance due to extraneous variables (also known as variance due to experimental error).

Random-Effects Model

A random-effects model describes an experiment in which all factors are random factors. The table below shows the expected value of mean squares for a balanced, two-factor, full factorial experiment when both factors are random:

Mean square    Expected value
MSA            σ²WG + nσ²AB + nqσ²A
MSB            σ²WG + nσ²AB + npσ²B
MSAB           σ²WG + nσ²AB
MSWG           σ²WG

Mixed Model

A mixed model describes an experiment in which at least one factor is a fixed factor, and at least one factor is a random factor. The table below shows the expected value of mean squares for a balanced, two-factor, full factorial experiment, when Factor A is a fixed factor and Factor B is a random factor:

Mean square    Expected value
MSA            σ²WG + nσ²AB + nqσ²A
MSB            σ²WG + npσ²B
MSAB           σ²WG + nσ²AB
MSWG           σ²WG

Note: The expected values shown in the tables are approximations. For all practical purposes, the values for the fixed-effects model will always be valid for computing test statistics (see below). The values for the random-effects model and the mixed model will be valid when random-effect levels in the experiment represent a small fraction of levels in the population.

Test Statistics

Suppose we want to test the significance of a main effect or the interaction effect in a two-factor, full factorial experiment. We can use the mean squares to define a test statistic F as follows:

F(v1, v2) = MSEFFECT 1 / MSEFFECT 2

where MSEFFECT 1 is the mean square for the effect we want to test; MSEFFECT 2 is an appropriate mean square, based on the expected value of mean squares; v1 is the degrees of freedom for MSEFFECT 1 ; and v2 is the degrees of freedom for MSEFFECT 2.

How do you choose an appropriate mean square for the denominator in an F ratio? The expected value of the denominator of the F ratio should be identical to the expected value of the numerator, except for one thing: the numerator should have an extra term that includes the variance of the effect being tested ( σ²EFFECT ).

Fixed-Effects Model

The table below shows how to construct F ratios when an experiment uses a fixed-effects model.

Table 1. Fixed-Effects Model

Effect   Expected value of mean square   F ratio
A        σ²WG + nqσ²A                    MSA / MSWG
B        σ²WG + npσ²B                    MSB / MSWG
AB       σ²WG + nσ²AB                    MSAB / MSWG
Error    σ²WG

Random-Effects Model

The table below shows how to construct F ratios when an experiment uses a random-effects model.

Table 2. Random-Effects Model

Effect   Expected value of mean square   F ratio
A        σ²WG + nσ²AB + nqσ²A            MSA / MSAB
B        σ²WG + nσ²AB + npσ²B            MSB / MSAB
AB       σ²WG + nσ²AB                    MSAB / MSWG
Error    σ²WG

Mixed Model

The table below shows how to construct F ratios when an experiment uses a mixed model. Here, Factor A is a fixed effect, and Factor B is a random effect.

Table 3. Mixed Model

Effect       Expected value of mean square   F ratio
A (fixed)    σ²WG + nσ²AB + nqσ²A            MSA / MSAB
B (random)   σ²WG + npσ²B                    MSB / MSWG
AB           σ²WG + nσ²AB                    MSAB / MSWG
Error        σ²WG
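Tables 1 through 3 can be collapsed into a small lookup table in code. The sketch below (with hypothetical mean squares) maps each effect to the appropriate denominator mean square, based on the model type:

  # Denominator mean square for each F ratio in a balanced two-factor design,
  # summarizing Tables 1-3 above ("mixed" assumes A fixed and B random)
  DENOMINATOR = {
      "fixed":  {"A": "MSWG", "B": "MSWG", "AB": "MSWG"},
      "random": {"A": "MSAB", "B": "MSAB", "AB": "MSWG"},
      "mixed":  {"A": "MSAB", "B": "MSWG", "AB": "MSWG"},
  }

  def f_ratio(effect, model, ms):
      """Return the F ratio for an effect, given a dict of mean squares."""
      return ms["MS" + effect] / ms[DENOMINATOR[model][effect]]

  ms = {"MSA": 75.0, "MSB": 50.0, "MSAB": 25.0, "MSWG": 15.0}   # hypothetical
  print(f_ratio("B", "fixed", ms))    # 50 / 15 = 3.33
  print(f_ratio("B", "random", ms))   # 50 / 25 = 2.0

The two printed values match the F ratios for Factor B in Problems 1 and 2 at the end of this lesson, which use these same mean squares.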

How to Interpret F Ratios

For each F ratio in the tables above, notice that the numerator should equal the denominator when the variation due to the source effect ( σ²SOURCE ) is zero (i.e., when the source does not affect the dependent variable). And the numerator should be bigger than the denominator when the variation due to the source effect is not zero (i.e., when the source does affect the dependent variable).

Defined in this way, each F ratio is a convenient measure that we can use to test the null hypothesis about the effect of a source (Factor A, Factor B, or the AB interaction) on the dependent variable. Here's how to conduct the test:

  • When the F ratio is close to one, the numerator of the F ratio is approximately equal to the denominator. This indicates that the source did not affect the dependent variable, so we cannot reject the null hypothesis.
  • When the F ratio is significantly greater than one, the numerator is bigger than the denominator. This indicates that the source did affect the dependent variable, so we must reject the null hypothesis.

What does it mean for the F ratio to be significantly greater than one? To answer that question, we need to talk about the P-value.

P-Value

In an experiment, a P-value is the probability of obtaining a result more extreme than the observed experimental outcome, assuming the null hypothesis is true.

With analysis of variance for a full factorial experiment, the F ratios are the observed experimental outcomes that we are interested in. So, the P-value would be the probability that an F ratio would be more extreme (i.e., bigger) than the actual F ratio computed from experimental data.

How does an experimenter attach a probability to an observed F ratio? Luckily, the F ratio is a random variable that has an F distribution. The degrees of freedom (v1 and v2) for the F ratio are the degrees of freedom associated with the effects used to compute the F ratio.

For example, consider the F ratio for Factor A when Factor A is a fixed effect. That F ratio (FA) is computed from the following formula:

FA = F(v1, v2) = MSA / MSWG

MSA (the numerator in the formula) has degrees of freedom equal to dfA ; so for F, v1 is equal to dfA . Similarly, MSWG (the denominator in the formula) has degrees of freedom equal to dfWG ; so for F, v2 is equal to dfWG . Knowing the F ratio and its degrees of freedom, we can use an F table or an online calculator to find the probability that an F ratio will be bigger than the actual F ratio observed in the experiment.
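If you prefer software to a printed F table, the survival function of SciPy's F distribution returns this probability directly. A sketch, using the Factor A result from the ANOVA summary table later in this lesson (F = 9.45 with 1 and 30 degrees of freedom):

  from scipy.stats import f

  # P-value: probability that F(v1, v2) exceeds the observed F ratio
  f_obs, v1, v2 = 9.45, 1, 30
  p_value = f.sf(f_obs, v1, v2)    # survival function = 1 - CDF
  print(round(p_value, 3))         # about 0.004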

F Distribution Calculator

To find the P-value associated with an F ratio, use Stat Trek's free F Distribution Calculator. You can find the calculator in the Appendix section of the table of contents (accessed by tapping the "Analysis of Variance: Table of Contents" button at the top of the page), or you can tap the button below.

F Distribution Calculator

For examples that show how to find the P-value for an F ratio, see Problem 1 or Problem 2 at the end of this lesson.

Hypothesis Test

Recall that the experimenter specified a significance level early on - before the first data point was collected. Once you know the significance level and the P-values, the hypothesis tests are routine. Here's the decision rule for accepting or rejecting a null hypothesis:

  • If the P-value is bigger than the significance level, accept the null hypothesis.
  • If the P-value is equal to or smaller than the significance level, reject the null hypothesis.

A "big" P-value for a source of variation (Factor A, Factor B, or the AB interaction) indicates that the source did not have a statistically significant effect on the dependent variable. A "small" P-value indicates that the source did have a statistically significant effect on the dependent variable.

Magnitude of Effect

The hypothesis tests tell us whether sources of variation in our experiment had a statistically significant effect on the dependent variable, but the tests do not address the magnitude of the effect. Here's the issue:

  • When the sample size is large, you may find that even small effects are statistically significant.
  • When the sample size is small, you may find that even big effects are not statistically significant.

With this in mind, it is customary to supplement analysis of variance with an appropriate measure of effect size. Eta squared (η²) is one such measure. Eta squared is the proportion of variance in the dependent variable that is explained by a treatment effect. The eta squared formula for a main effect or an interaction effect is:

η² = SSEFFECT / SST

where SSEFFECT is the sum of squares for a particular treatment effect (i.e., Factor A, Factor B, or the AB interaction) and SST is the total sum of squares.

ANOVA Summary Table

It is traditional to summarize ANOVA results in an analysis of variance table. Here, filled with hypothetical data, is an analysis of variance table for a 2 x 3 full factorial experiment.

Analysis of Variance Table

Source   SS       df               MS       F      P
A        13,225   p - 1 = 1        13,225   9.45   0.004
B        2450     q - 1 = 2        1225     0.88   0.427
AB       9650     (p-1)(q-1) = 2   4825     3.45   0.045
WG       42,000   pq(n - 1) = 30   1400
Total    67,325   npq - 1 = 35

In this experiment, Factors A and B were fixed effects, so F ratios were computed with that in mind. There were two levels of Factor A, so p equals two. There were three levels of Factor B, so q equals three. And finally, each treatment group had six subjects, so n equals six. The table shows critical outputs for each main effect and for the AB interaction effect.

Many of the table entries are derived from the sum of squares (SS) and degrees of freedom (df), based on the following formulas:

MSA = SSA / dfA = 13,225/1 = 13,225

MSB = SSB / dfB = 2450/2 = 1225

MSAB = SSAB / dfAB = 9650/2 = 4825

MSWG = SSW / dfWG = 42,000/30 = 1400


FA = MSA / MSWG = 13,225/1400 = 9.45

FB = MSB / MSWG = 2450/1400 = 0.88

FAB = MSAB / MSWG = 9650/1400 = 3.45

where MSA is mean square for Factor A, MSB is mean square for Factor B, MSAB is mean square for the AB interaction, MSWG is the within-groups mean square, FA is the F ratio for Factor A, FB is the F ratio for Factor B, and FAB is the F ratio for the AB interaction.
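As a check on this arithmetic, the sketch below reproduces the mean squares, F ratios, and P-values in the table from the SS and df entries alone:

  from scipy.stats import f

  # SS and df entries copied from the ANOVA summary table above
  ss = {"A": 13225.0, "B": 2450.0, "AB": 9650.0, "WG": 42000.0}
  df = {"A": 1, "B": 2, "AB": 2, "WG": 30}

  ms = {key: ss[key] / df[key] for key in ss}    # mean squares

  # Both factors are fixed, so every effect is tested against MSWG
  for effect in ("A", "B", "AB"):
      f_ratio = ms[effect] / ms["WG"]
      p = f.sf(f_ratio, df[effect], df["WG"])
      print(f"{effect}: F = {f_ratio:.2f}, P = {p:.3f}")
  # Prints F = 9.45, 0.88, 3.45 and P = 0.004, 0.427, 0.045, matching the table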

An ANOVA table provides all the information an experimenter needs to (1) test hypotheses and (2) assess the magnitude of treatment effects.

Hypothesis Tests

The P-value (shown in the last column of the ANOVA table) is the probability that an F statistic would be more extreme (bigger) than the F ratio shown in the table, assuming the null hypothesis is true. When a P-value for a main effect or an interaction effect is bigger than the significance level, we accept the null hypothesis for the effect; when it is smaller, we reject the null hypothesis.

Source   SS       df               MS       F      P
A        13,225   p - 1 = 1        13,225   9.45   0.004
B        2450     q - 1 = 2        1225     0.88   0.427
AB       9650     (p-1)(q-1) = 2   4825     3.45   0.045
WG       42,000   pq(n - 1) = 30   1400
Total    67,325   npq - 1 = 35

For example, based on the F ratios in the table above, we can draw the following conclusions:

  • The P-value for Factor A is 0.004. Since the P-value is smaller than the significance level (0.05), we reject the null hypothesis that Factor A has no effect on the dependent variable.
  • The P-value for Factor B is 0.427. Since the P-value is bigger than the significance level (0.05), we cannot reject the null hypothesis that Factor B has no effect on the dependent variable.
  • The P-value for the AB interaction is 0.045. Since the P-value is smaller than the significance level (0.05), we reject the null hypothesis of no significant interaction. That is, we conclude that the effect of each factor varies, depending on the level of the other factor.

Magnitude of Effects

To assess the strength of a treatment effect, an experimenter can compute eta squared (η²). The computation is easy, using sum of squares entries from an ANOVA table in the formula below:

η² = SSEFFECT / SST

where SSEFFECT is the sum of squares for the main or interaction effect being tested and SST is the total sum of squares.

To illustrate how this works, let's compute η² for the main effects and the interaction effect in the ANOVA table below:

Source   SS     df   MS   F     P
A        100    2    50   2.5   0.09
B        180    3    60   3     0.04
AB       300    6    50   2.5   0.03
WG       960    48   20
Total    1540   59

Based on the table entries, here are the computations for eta squared (η²):

η²A = SSA / SST = 100 / 1540 = 0.065

η²B = SSB / SST = 180 / 1540 = 0.117

η²AB = SSAB / SST = 300 / 1540 = 0.195

Conclusion: In this experiment, Factor A accounted for 6.5% of the variance in the dependent variable; Factor B, 11.7% of the variance; and the interaction effect, 19.5% of the variance.
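The same arithmetic takes only a few lines of code, using the SS entries from the table above:

  # Sum-of-squares entries from the ANOVA table above
  ss = {"A": 100.0, "B": 180.0, "AB": 300.0}
  ss_total = 1540.0

  for effect, value in ss.items():
      print(f"eta squared for {effect}: {value / ss_total:.3f}")
  # A: 0.065, B: 0.117, AB: 0.195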

Test Your Understanding

Problem 1

In the ANOVA table shown below, the P-value for Factor B is missing. Assuming Factors A and B are fixed effects, what is the correct entry for the missing P-value?

Source   SS     df   MS   F      P
A        300    4    75   5.00   0.002
B        100    2    50   3.33   ???
AB       200    8    25   1.67   0.12
WG       900    60   15
Total    1500   74

Hint: Stat Trek's F Distribution Calculator may be helpful.

(A) 0.01
(B) 0.04
(C) 0.20
(D) 0.97
(E) 0.99

Solution

The correct answer is (B).

A P-value is the probability of obtaining a result more extreme (bigger) than the observed F ratio, assuming the null hypothesis is true. From the ANOVA table, we know the following:

  • The observed value of the F ratio for Factor B is 3.33.
  • Since Factor B is a fixed effect, the F ratio (FB) was computed from the following formula:

    FB = F(v1, v2) = MSB / MSWG

  • The degrees of freedom (v1) for the Factor B mean square (MSB) is 2.
  • The degrees of freedom (v2) for the within-groups mean square (MSWG) is 60.

Therefore, the P-value we are looking for is the probability that an F with 2 and 60 degrees of freedom is greater than 3.33. We want to know:

P [ F(2, 60) > 3.33 ]

Now, we are ready to use the F Distribution Calculator. We enter the degrees of freedom (v1 = 2) for the Factor B mean square, the degrees of freedom (v2 = 60) for the within-groups mean square, and the F value (3.33) into the calculator; and hit the Calculate button.

[F Distribution Calculator output, showing that the probability of an F ratio greater than 3.33 is 0.04.]

The calculator reports that the probability that F is greater than 3.33 equals about 0.04. Hence, the correct P-value is 0.04.
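If you have SciPy handy, a one-line survival-function call confirms the calculator's result:

  from scipy.stats import f
  print(f.sf(3.33, 2, 60))   # P[ F(2, 60) > 3.33 ] is about 0.04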


Problem 2

In the ANOVA table shown below, the P-value for Factor B is missing. Assuming Factors A and B are random effects, what is the correct entry for the missing P-value?

Source   SS     df   MS   F      P
A        300    4    75   3.00   0.09
B        100    2    50   2.00   ???
AB       200    8    25   1.67   0.12
WG       900    60   15
Total    1500   74

Hint: Stat Trek's F Distribution Calculator may be helpful.

(A) 0.01
(B) 0.04
(C) 0.20
(D) 0.80
(E) 0.96

Solution

The correct answer is (C).

A P-value is the probability of obtaining a result more extreme (bigger) than the observed F ratio, assuming the null hypothesis is true. From the ANOVA table, we know the following:

  • The observed value of the F ratio for Factor B is 2.0.
  • Since Factor B is a random effect, the F ratio (FB) was computed from the following formula:

    FB = F(v1, v2) = MSB / MSAB

  • The degrees of freedom (v1) for the Factor B mean square (MSB) is 2.
  • The degrees of freedom (v2) for the AB interaction (MSAB) is 8.

Therefore, the P-value we are looking for is the probability that an F with 2 and 8 degrees of freedom is greater than 2.0. We want to know:

P [ F(2, 8) > 2.0 ]

Now, we are ready to use the F Distribution Calculator. We enter the degrees of freedom (v1 = 2) for the Factor B mean square, the degrees of freedom (v2 = 8) for the AB interaction mean square, and the F value (2.0) into the calculator; and hit the Calculate button.

[F Distribution Calculator output, showing that the probability of an F ratio greater than 2.0 is 0.20.]

The calculator reports that the probability that F is greater than 2.0 equals about 0.20. Hence, the correct P-value is 0.20.
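Again, a one-line SciPy call confirms the calculator's result:

  from scipy.stats import f
  print(f.sf(2.0, 2, 8))   # P[ F(2, 8) > 2.0 ] is about 0.20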