Two-Factor Analysis of Variance: Example
In this lesson, we use analysis of variance to analyze results from a balanced, two-factor, full-factorial experiment; and we show how to interpret the results of our analysis. We'll analyze results for a fixed-effects model, a random-effects model, and a mixed model.
Note: Computations for analysis of variance are usually handled by a software package. For this example, however, we will do the computations "manually", since the gory details have educational value.
Problem Statement
As part of a full factorial experiment, a researcher tests the effect of Factor A and Factor B on a continuous variable. The design calls for two levels of Factor A and three levels of Factor B - six treatment groups in all. The researcher selects 30 subjects randomly from a larger population and randomly assigns five subjects to each treatment group.
The researcher collects one dependent variable score from each subject, as shown in the table below:
Table 1. Dependent Variable Scores
| A1B1 | A1B2 | A1B3 | A2B1 | A2B2 | A2B3 |
|---|---|---|---|---|---|
| 1 | 1 | 2 | 2 | 2 | 3 |
| 2 | 2 | 3 | 3 | 3 | 4 |
| 3 | 4 | 4 | 4 | 5 | 5 |
| 2 | 3 | 3 | 3 | 2 | 4 |
| 1 | 1 | 2 | 2 | 2 | 3 |
The treatment levels represent all the levels of interest to the experimenter, so this experiment uses a fixed-effects model to select treatment levels for study.
In conducting this experiment, the researcher has two research questions:
- Do the independent variables have a significant effect on the dependent variable?
- How strong is the effect of the independent variables on the dependent variable?
To answer these questions, the researcher uses analysis of variance.
Is ANOVA the Right Technique?
Before you crunch the first number in analysis of variance, you must be sure that analysis of variance is the correct technique. That means you need to ask two questions:
- Is the experimental design compatible with analysis of variance?
- Does the dataset satisfy the critical assumptions required for two-factor analysis of variance?
Let's address both of those questions.
Experimental Design
As we discussed in the previous lesson (see Analysis With Full Factorial Experiments), analysis of variance is appropriate with a balanced, completely randomized, full factorial experiment; so we can check the experimental design box.
Critical Assumptions
We also learned in the previous lesson that analysis of variance with full factorial experiments makes three critical assumptions:
- Independence. The dependent variable score for each experimental unit is independent of the score for any other unit.
- Normality. In the population, dependent variable scores are normally distributed within treatment groups.
- Equality of variance. In the population, the variance of dependent variable scores in each treatment group is equal. (Equality of variance is also known as homogeneity of variance or homoscedasticity.)
Therefore, before we implement analysis of variance with this study, we need to make sure our dataset is consistent with all three assumptions.
Independence of Scores
The assumption of independence is the most important assumption. When that assumption is violated, the resulting statistical tests can be misleading.
The independence assumption is satisfied by the design of the study, which features random selection of subjects and random assignment to treatment groups. Randomization tends to distribute effects of extraneous variables evenly across groups.
Normal Distributions in Groups
Violations of normality can be a problem when sample size is small, as it is in this study. Therefore, it is important to be on the lookout for any indication of non-normality.
There are many ways to check for normality. On this website, we describe three at: How to Test for Normality: Three Simple Tests. Given the small sample size, our best option for testing normality is to look at the following descriptive statistics:
- Central tendency. The mean and the median are summary measures used to describe central tendency - the most "typical" value in a set of values. With a normal distribution, the mean is equal to the median.
- Skewness. Skewness is a measure of the asymmetry of a probability distribution. If observations are distributed symmetrically around the mean, the skewness value is zero; otherwise, the skewness value is positive or negative. As a rule of thumb, skewness between -2 and +2 is consistent with a normal distribution.
- Kurtosis. Kurtosis is a measure of whether observations cluster around the mean of the distribution or in the tails of the distribution. The normal distribution has a kurtosis value of zero. As a rule of thumb, kurtosis between -2 and +2 is consistent with a normal distribution.
The table below shows the mean, median, skewness, and kurtosis for each group from our study.
Table 2. Descriptive Statistics
| | A1B1 | A1B2 | A1B3 | A2B1 | A2B2 | A2B3 |
|---|---|---|---|---|---|---|
| Mean | 1.8 | 2.2 | 2.8 | 2.8 | 2.8 | 3.8 |
| Median | 2 | 2 | 3 | 3 | 2 | 4 |
| Range | 2 | 3 | 2 | 2 | 3 | 2 |
| Skew | 0.51 | 0.54 | 0.51 | 0.51 | 1.71 | 0.51 |
| Kurt | -0.61 | -1.49 | -0.61 | -0.61 | 2.66 | -0.61 |
In all six groups, the difference between the mean and median looks small (relative to the range). Skewness is between -2 and +2 for every group, and kurtosis is between -2 and +2 for every group except A2B2, where it is only slightly above +2. These are crude tests, but they provide some confidence for the assumption of normality in each group.
Note: With Excel, you can easily compute the descriptive statistics for treatment groups. To see how, go to: How to Test for Normality: Example 1.
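If you would rather script the check than point and click, here is a minimal Python sketch (assuming the numpy and scipy packages are installed) that reproduces the descriptive statistics in Table 2. With bias=False, scipy's skew and kurtosis functions apply small-sample adjustments that should match Excel's SKEW and KURT functions.

```python
import numpy as np
from scipy.stats import kurtosis, skew

# Five dependent variable scores per treatment group (Table 1).
groups = {
    "A1B1": [1, 2, 3, 2, 1],
    "A1B2": [1, 2, 4, 3, 1],
    "A1B3": [2, 3, 4, 3, 2],
    "A2B1": [2, 3, 4, 3, 2],
    "A2B2": [2, 3, 5, 2, 2],
    "A2B3": [3, 4, 5, 4, 3],
}

for name, scores in groups.items():
    x = np.asarray(scores, dtype=float)
    print(
        name,
        round(x.mean(), 2),                 # mean
        np.median(x),                       # median
        x.max() - x.min(),                  # range
        round(skew(x, bias=False), 2),      # sample-adjusted skewness
        round(kurtosis(x, bias=False), 2),  # sample-adjusted excess kurtosis
    )
```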
Homogeneity of Variance
When the normality assumption is satisfied, you can use Hartley's Fmax test to test for homogeneity of variance. Here's how to implement the test:
- Step 1. Compute the sample variance ( s2j ) for each treatment group:

$$ s_j^2 = \frac{\sum_{i=1}^{n_j} ( X_{ij} - \bar{X}_j )^2}{n_j - 1} $$

where X i, j is the score for observation i in Group j, X j is the mean of Group j, and n j is the number of observations in Group j.
Here is the variance ( s2j ) for each group in the study.
Table 3. Sample Variance
| A1B1 | A1B2 | A1B3 | A2B1 | A2B2 | A2B3 |
|---|---|---|---|---|---|
| 0.7 | 1.7 | 0.7 | 0.7 | 1.7 | 0.7 |

- Step 2. Compute an F ratio from the following formula:
FRATIO = s2MAX / s2MIN
FRATIO = 1.7 / 0.7
FRATIO = 2.43
where s2MAX is the largest group variance, and s2MIN is the smallest group variance.
- Step 3. Compute degrees of freedom ( df ).
df = n - 1
df = 5 - 1
df = 4
where n is the largest sample size in any group.
- Step 4. Based on the degrees of freedom ( 4 ) and the number of groups ( 6 ), find the critical F value from the Table of Critical F Values for Hartley's Fmax Test. From the table, we see that the critical Fmax value is 29.5.
Note: The critical F values in the table are based on a significance level of 0.05.
- Step 5. Compare the observed F ratio computed in Step 2 to the critical F value recovered from the Fmax table in Step 4. If the F ratio is smaller than the Fmax table value, the variances are homogeneous. Otherwise, the variances are heterogeneous.
Here, the F ratio (2.43) is smaller than the Fmax value (29.5), so we conclude that the variances are homogeneous.
Note: Other tests, such as Bartlett's test, can also test for homogeneity of variance.
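Here is a short sketch of the same test in Python, assuming numpy is available. The critical value still has to be looked up in an Fmax table, since scipy does not ship Hartley's Fmax distribution.

```python
import numpy as np

# Five dependent variable scores per treatment group (Table 1).
groups = [
    [1, 2, 3, 2, 1], [1, 2, 4, 3, 1], [2, 3, 4, 3, 2],
    [2, 3, 4, 3, 2], [2, 3, 5, 2, 2], [3, 4, 5, 4, 3],
]

# Step 1: sample variance for each group (ddof=1 divides by n - 1).
variances = [np.var(g, ddof=1) for g in groups]  # [0.7, 1.7, 0.7, 0.7, 1.7, 0.7]

# Step 2: ratio of the largest to the smallest group variance.
f_ratio = max(variances) / min(variances)
print(round(f_ratio, 2))                         # 2.43

# Steps 3-5: with df = 4 and k = 6 groups, the critical value from an
# Fmax table is 29.5; since 2.43 < 29.5, the variances are homogeneous.
```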
Analysis of Variance
Having confirmed that the critical assumptions are tenable, we can proceed with analysis of variance. That means taking the following steps:
- Specify a mathematical model to describe how main effects and interaction effects influence the dependent variable.
- Write statistical hypotheses to be tested by experimental data.
- Specify a significance level for a hypothesis test.
- Compute the grand mean and the mean scores for each treatment group.
- Compute sums of squares for each effect in the model.
- Find the degrees of freedom associated with each effect in the model.
- Based on sums of squares and degrees of freedom, compute mean squares for each effect in the model.
- Find the expected value of the mean squares for each effect in the model.
- Compute a test statistic for each effect, based on observed mean squares and their expected values.
- Find the P value for each test statistic.
- Accept or reject the null hypothesis for each effect, based on the P value and the significance level.
- Assess the magnitude of effect, based on sums of squares.
Now, let's execute each step, one-by-one, with our sample experiment.
Mathematical Model
For every experimental design, there is a mathematical model that accounts for all of the independent and extraneous variables that affect the dependent variable.
For example, here is the fixed-effects mathematical model for a two-factor, completely randomized, full-factorial experiment:
$$ X_{ijm} = \mu + \alpha_i + \beta_j + \alpha\beta_{ij} + \varepsilon_{m(ij)} $$
where X i j m is the dependent variable score for subject m in treatment group ij, μ is the population mean, α i is the main effect of Factor A at level i; β j is the main effect of Factor B at level j; αβ i j is the interaction effect of Factor A at level i and Factor B at level j; and ε m ( ij ) is the effect of all other extraneous variables on subject m in treatment group ij.
For this model, it is assumed that ε m ( ij ) is normally and independently distributed with a mean of zero and a variance of σε2. The mean ( μ ) is constant.
Note: The parentheses in ε m ( ij ) indicate that subjects are nested under treatment groups. When a subject is assigned to only one treatment group, we say that the subject is nested under a treatment.
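To make the model concrete, the sketch below generates one simulated dataset from a fixed-effects model of this form. All parameter values here (mu, the alpha and beta vectors, sigma) are hypothetical, chosen only to illustrate the structure of the equation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical parameter values, for illustration only.
mu = 2.7                            # population mean
alpha = np.array([-0.4, 0.4])       # Factor A main effects (sum to zero)
beta = np.array([-0.4, -0.2, 0.6])  # Factor B main effects (sum to zero)
ab = np.zeros((2, 3))               # interaction effects (none in this sketch)
sigma = 1.0                         # standard deviation of experimental error
n = 5                               # subjects nested in each treatment group

# X[i, j, m] = mu + alpha_i + beta_j + (alpha*beta)_ij + eps_m(ij)
X = (mu
     + alpha[:, None, None]
     + beta[None, :, None]
     + ab[:, :, None]
     + rng.normal(0.0, sigma, size=(2, 3, n)))

print(X.round(1))
```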
Statistical Hypotheses
With a full factorial experiment, it is possible to test all main effects and all interaction effects. For example, here are the null hypotheses (H0) and alternative hypotheses (H1) for each effect in a two-factor full factorial experiment.
| Factor A | Factor B | AB interaction |
|---|---|---|
| H0: α i = 0 for all i | H0: β j = 0 for all j | H0: αβ ij = 0 for all ij |
| H1: α i ≠ 0 for some i | H1: β j ≠ 0 for some j | H1: αβ ij ≠ 0 for some ij |
Significance Level
The significance level (also known as alpha or α) is the probability of rejecting the null hypothesis when it is actually true. The significance level for an experiment is specified by the experimenter, before data collection begins.
Experimenters often choose significance levels of 0.05 or 0.01. For this experiment, let's use a significance level of 0.05.
Mean Scores
Analysis of variance for a full factorial experiment begins by computing a grand mean, marginal means, and group means. Here are computations for the various means, based on dependent variable scores from Table 1:
- Grand mean. The grand mean ( X ) is the mean of all N = pqn observations, computed as follows:

$$ \bar{X} = \frac{1}{N} \sum_{i=1}^{p} \sum_{j=1}^{q} \sum_{m=1}^{n} X_{ijm} $$

X = 2.7
- Marginal means for Factor A. The mean for level i of Factor A ( X i . . ) is the mean of the nq observations at that level, computed as follows:

$$ \bar{X}_{i..} = \frac{1}{nq} \sum_{j=1}^{q} \sum_{m=1}^{n} X_{ijm} $$

X 1 . . = 2.27
X 2 . . = 3.13
- Marginal means for Factor B. The mean for level j of Factor B ( X . j . ) is the mean of the np observations at that level, computed as follows:

$$ \bar{X}_{.j.} = \frac{1}{np} \sum_{i=1}^{p} \sum_{m=1}^{n} X_{ijm} $$

X . 1 . = 2.3
X . 2 . = 2.5
X . 3 . = 3.3
- Group means. The mean of the n observations in group ij ( X i j . ) is computed as follows:

$$ \bar{X}_{ij.} = \frac{1}{n} \sum_{m=1}^{n} X_{ijm} $$

X 1 1 . = 1.8
X 1 2 . = 2.2
X 1 3 . = 2.8
X 2 1 . = 2.8
X 2 2 . = 2.8
X 2 3 . = 3.8
In the equations above, N is the total sample size across all treatment groups; n is the sample size in a single treatment group, p is the number of levels of Factor A, and q is the number of levels of Factor B.
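The same means can be computed in a few lines of numpy. This sketch stores the scores from Table 1 in a p x q x n array so that each mean is an average over the appropriate axes.

```python
import numpy as np

# scores[i, j, m]: Factor A level i, Factor B level j, subject m (Table 1).
scores = np.array([
    [[1, 2, 3, 2, 1], [1, 2, 4, 3, 1], [2, 3, 4, 3, 2]],  # A1: B1, B2, B3
    [[2, 3, 4, 3, 2], [2, 3, 5, 2, 2], [3, 4, 5, 4, 3]],  # A2: B1, B2, B3
], dtype=float)

grand_mean = scores.mean()          # 2.7
a_means = scores.mean(axis=(1, 2))  # [2.27, 3.13]
b_means = scores.mean(axis=(0, 2))  # [2.3, 2.5, 3.3]
group_means = scores.mean(axis=2)   # [[1.8, 2.2, 2.8], [2.8, 2.8, 3.8]]

print(grand_mean, a_means.round(2), b_means, group_means)
```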
Sums of Squares
A sum of squares is the sum of squared deviations from a mean score. Two-way analysis of variance makes use of five sums of squares. Below, we compute all five sums of squares:
- Factor A sum of squares. The sum of squares for Factor A ( SSA ) measures variation of the marginal means of Factor A ( X i . . ) around the grand mean ( X ). It can be computed from the following formula:

$$ SS_A = nq \sum_{i=1}^{p} ( \bar{X}_{i..} - \bar{X} )^2 $$

SSA = 5.633

- Factor B sum of squares. The sum of squares for Factor B ( SSB ) measures variation of the marginal means of Factor B ( X . j . ) around the grand mean ( X ). It can be computed from the following formula:

$$ SS_B = np \sum_{j=1}^{q} ( \bar{X}_{.j.} - \bar{X} )^2 $$

SSB = 5.600

- Interaction sum of squares. The sum of squares for the interaction between Factor A and Factor B ( SSAB ) can be computed from the following formula:

$$ SS_{AB} = n \sum_{i=1}^{p} \sum_{j=1}^{q} ( \bar{X}_{ij.} - \bar{X}_{i..} - \bar{X}_{.j.} + \bar{X} )^2 $$

SSAB = 0.267

- Within-groups sum of squares. The within-groups sum of squares ( SSW ) measures variation of all scores ( X i j m ) around their respective group means ( X i j . ). It can be computed from the following formula:

$$ SS_W = \sum_{i=1}^{p} \sum_{j=1}^{q} \sum_{m=1}^{n} ( X_{ijm} - \bar{X}_{ij.} )^2 $$

SSW = 24.80

- Total sum of squares. The total sum of squares ( SST ) measures variation of all scores ( X i j m ) around the grand mean ( X ). It can be computed from the following formula:

$$ SS_T = \sum_{i=1}^{p} \sum_{j=1}^{q} \sum_{m=1}^{n} ( X_{ijm} - \bar{X} )^2 $$

SST = 36.30
In the formulas above, n is the sample size in each treatment group, p is the number of levels of Factor A, and q is the number of levels of Factor B.
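Continuing the numpy sketch from the previous section, the five sums of squares follow directly from the formulas above; the closing assertion checks that the four component sums partition the total.

```python
import numpy as np

scores = np.array([
    [[1, 2, 3, 2, 1], [1, 2, 4, 3, 1], [2, 3, 4, 3, 2]],  # A1
    [[2, 3, 4, 3, 2], [2, 3, 5, 2, 2], [3, 4, 5, 4, 3]],  # A2
], dtype=float)
p, q, n = scores.shape

X = scores.mean()              # grand mean
Xi = scores.mean(axis=(1, 2))  # marginal means, Factor A
Xj = scores.mean(axis=(0, 2))  # marginal means, Factor B
Xij = scores.mean(axis=2)      # group means

ss_a = n * q * np.sum((Xi - X) ** 2)                            # 5.633
ss_b = n * p * np.sum((Xj - X) ** 2)                            # 5.600
ss_ab = n * np.sum((Xij - Xi[:, None] - Xj[None, :] + X) ** 2)  # 0.267
ss_w = np.sum((scores - Xij[:, :, None]) ** 2)                  # 24.800
ss_t = np.sum((scores - X) ** 2)                                # 36.300

assert np.isclose(ss_t, ss_a + ss_b + ss_ab + ss_w)             # partition check
```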
Degrees of Freedom
The term degrees of freedom (df) refers to the number of independent sample points used to compute a statistic minus the number of parameters estimated from the sample points.
The degrees of freedom used to compute the various sums of squares for a balanced, two-way factorial experiment are shown in the table below:
Sum of squares | Degrees of freedom |
---|---|
Factor A | p - 1 = 2 - 1 = 1 |
Factor B | q - 1 = 3 - 1 = 2 |
AB interaction | ( p - 1 )( q - 1) = 1 * 2 = 2 |
Within groups | pq( n - 1 ) = 2 * 3 * 4 = 24 |
Total | npq - 1 = 2 * 3 * 5 - 1 = 29 |
Mean Squares
A mean square is an estimate of population variance. It is computed by dividing a sum of squares (SS) by its corresponding degrees of freedom (df), as shown below:
MS = SS / df
To conduct analysis of variance with a two-factor, full factorial experiment, we are interested in four mean squares:
- Factor A mean square. The Factor A mean square ( MSA ) measures
variation due to the main effect of Factor A. It can be computed as follows:
MSA = SSA / dfA = 5.63 / 1 = 5.63
- Factor B mean square. The Factor B mean square ( MSB ) measures
variation due to the main effect of Factor B. It can be computed as follows:
MSB = SSB / dfB = 5.6 / 2 = 2.8
- Interaction mean square. The mean square for the AB interaction measures variation due to
the AB interaction effect. It can be computed as follows:
MSAB = SSAB / dfAB = 0.267 / 2 ≈ 0.133
- Within groups mean square. The within-groups mean square ( MSWG ) measures
variation due to differences among experimental units within the same treatment group. It can be computed as follows:
MSWG = SSW / dfWG = 24.8 / 24 ≈ 1.033
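In code, the mean squares are just the sums of squares from the previous sketch divided by the degrees of freedom from Table above:

```python
# Sums of squares computed earlier, and the design constants.
ss_a, ss_b, ss_ab, ss_w = 5.633, 5.600, 0.267, 24.8
p, q, n = 2, 3, 5

ms_a = ss_a / (p - 1)                # 5.633
ms_b = ss_b / (q - 1)                # 2.800
ms_ab = ss_ab / ((p - 1) * (q - 1))  # 0.133 (0.1335 before rounding)
ms_wg = ss_w / (p * q * (n - 1))     # 1.033
```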
Expected Value
The expected value of a mean square is the average value of the mean square over a large number of experiments.
Statisticians have derived formulas for the expected value of mean squares for balanced, two-factor, full factorial experiments. The expected values differ, depending on whether the experiment uses all fixed factors, all random factors, or a mix of fixed and random factors. The table below shows the expected value of mean squares for a balanced, two-factor, full factorial experiment when both factors are fixed:
Mean square | Expected value |
---|---|
MSA | σ2WG + nqσ2A |
MSB | σ2WG + npσ2B |
MSAB | σ2WG + nσ2AB |
MSWG | σ2WG |
In the table above, n is the sample size in each treatment group, p is the number of levels for Factor A, q is the number of levels for Factor B, σ2A is the variance of main effects due to Factor A, σ2B is the variance of main effects due to Factor B, σ2AB is the variance due to interaction effects, and σ2WG is the variance due to extraneous variables (also known as variance due to experimental error).
Test Statistics
Suppose we want to test the significance of a main effect or the interaction effect in a two-factor, full factorial experiment. We can use the mean squares to define a test statistic F as follows:
F(v1, v2) = MSEFFECT 1 / MSEFFECT 2
where MSEFFECT 1 is the mean square for the effect we want to test; MSEFFECT 2 is an appropriate mean square, based on the expected value of mean squares; v1 is the degrees of freedom for MSEFFECT 1 ; and v2 is the degrees of freedom for MSEFFECT 2.
The expected value of the numerator of the F ratio should be identical to the expected value of the denominator, except for one thing: The numerator should have an extra term that includes the effect being tested.
Fixed-Effects Model
The table below shows how to construct F ratios when an experiment uses a fixed-effects model.
Table 4. F Ratios: Fixed-Effects Model
| Effect | F ratio | df ( v1 ) | df ( v2 ) |
|---|---|---|---|
| A | MSA / MSWG | p-1 | pq(n-1) |
| B | MSB / MSWG | q-1 | pq(n-1) |
| AB | MSAB / MSWG | (p-1)(q-1) | pq(n-1) |
Using formulas from the table above, we can compute an F ratio for each treatment effect, as shown below:
FA = F(v1, v2) = F(1, 24) = MSA / MSWG = 5.63 / 1.033 = 5.45
FB = F(v1, v2) = F(2, 24) = MSB / MSWG = 2.8 / 1.033 = 2.71
FAB = F(v1, v2) = F(2, 24) = MSAB / MSWG = 0.133 / 1.033 = 0.13
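These F ratios, along with the P-values discussed in the sections that follow, can be checked with scipy's F distribution; sf is the survival function, P( F > f ).

```python
from scipy.stats import f

ms_a, ms_b, ms_ab, ms_wg = 5.633, 2.800, 0.133, 1.033
df_wg = 24                             # pq(n - 1)

f_a = ms_a / ms_wg                     # 5.45
f_b = ms_b / ms_wg                     # 2.71
f_ab = ms_ab / ms_wg                   # 0.13

# P-values: the probability of an F ratio bigger than the one observed.
print(round(f.sf(f_a, 1, df_wg), 2))   # 0.03
print(round(f.sf(f_b, 2, df_wg), 2))   # 0.09
print(round(f.sf(f_ab, 2, df_wg), 2))  # 0.88
```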
How to Interpret F Ratios
For each F ratio in the table above, notice that the expected value of the numerator equals the expected value of the denominator when the variation due to the source effect ( σ2SOURCE ) is zero (i.e., when the source does not affect the dependent variable); and the expected value of the numerator exceeds the expected value of the denominator when the variation due to the source effect is not zero (i.e., when the source does affect the dependent variable).
Defined in this way, each F ratio is a convenient measure that we can use to test the null hypothesis about the effect of a source (Factor A, Factor B, or the AB interaction) on the dependent variable. Here's how to conduct the test:
- When the F ratio is close to one, the numerator of the F ratio is approximately equal to the denominator. This indicates that the source did not affect the dependent variable, so we cannot reject the null hypothesis.
- When the F ratio is significantly greater than one, the numerator is bigger than the denominator. This indicates that the source did affect the dependent variable, so we must reject the null hypothesis.
What does it mean for the F ratio to be significantly greater than one? To answer that question, we need to talk about the P-value.
P-Value
In an experiment, a P-value is the probability of obtaining a result more extreme than the observed experimental outcome, assuming the null hypothesis is true.
With analysis of variance, the F ratio is the observed experimental outcome that we are interested in. So, the P-value would be the probability that an F ratio would be more extreme (i.e., bigger) than the actual F ratio computed from experimental data.
The F ratios defined for analysis of variance follow the F distribution. Therefore, we can use Stat Trek's F Distribution Calculator to find the probability that an F statistic will be bigger than the actual F ratios observed in the experiment.
To illustrate how this can be done, we'll find the P-value for the F ratio associated with Factor A. Recall that the F ratio for Factor A was defined as follows:
F ratio = MSA / MSWG
To find the P-value for this F ratio, we enter three inputs into the F Distribution Calculator: the degrees of freedom (1) for the Factor A mean square, the degrees of freedom (24) for the within-groups mean square, and the observed F statistic (5.45); then, click the Calculate button.
From the calculator, we see that P( F > 5.45 ) equals about 0.03. Therefore, the P-value for Factor A is 0.03. Following the same procedure, we can find that the P-value for Factor B is 0.09; and the P-value for the AB interaction is 0.88.
Hypothesis Test
Recall that we specified a significance level of 0.05 for this experiment. Once you know the significance level and the P-value, the hypothesis test is routine. Here's the decision rule for accepting or rejecting the null hypothesis:
- If the P-value is bigger than the significance level, accept the null hypothesis.
- If the P-value is equal to or smaller than the significance level, reject the null hypothesis.
When we apply these decision rules to this experiment, here are the conclusions:
- Since the P-value (0.03) for Factor A is smaller than the significance level (0.05), we reject the null hypothesis that Factor A has no effect on the dependent variable.
- Since the P-value (0.09) for Factor B is bigger than the significance level (0.05), we cannot reject the null hypothesis that Factor B has no effect on the dependent variable.
- Since the P-value (0.88) for the AB interaction is bigger than the significance level (0.05), we cannot reject the null hypothesis that the AB interaction has no effect on the dependent variable.
Magnitude of Effect
The hypothesis test tells us whether a main effect or an interaction effect has a statistically significant effect on the dependent variable, but it does not address the magnitude (i.e., strength) of the effect. Here's the issue:
- When the sample size is large, you may find that even small effects are statistically significant.
- When the sample size is small, you may find that even big effects are not statistically significant.
With this in mind, it is customary to supplement analysis of variance with an appropriate measure of the magnitude of each treatment effect. Eta squared (η2) is one such measure. Eta squared is the proportion of variance in the dependent variable that is explained by a treatment effect. The eta squared formula for analysis of variance is:
η2 = SSEFFECT / SST
where SSEFFECT is the sum of squares for a treatment effect and SST is the total sum of squares.
Given this formula, we can compute eta squared for each treatment effect in this experiment, as shown below:
η2A = SSA / SST = 5.63 / 36.3 = 0.155
η2B = SSB / SST = 5.60 / 36.3 = 0.154
η2AB = SSAB / SST = 0.27 / 36.3 = 0.007
Thus, 15.5% of the variance in the dependent variable can be explained by Factor A; 15.4%, by Factor B; and 0.7% by the AB interaction.
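As a quick check, here are the same eta squared computations in Python:

```python
ss_a, ss_b, ss_ab, ss_t = 5.633, 5.600, 0.267, 36.3

for label, ss in [("A", ss_a), ("B", ss_b), ("AB", ss_ab)]:
    print(label, round(ss / ss_t, 3))  # A 0.155, B 0.154, AB 0.007
```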
ANOVA Summary Table
It is traditional to summarize ANOVA results in an analysis of variance table. The analysis that we just conducted provides all of the information that we need to produce the following ANOVA summary table:
Analysis of Variance Table
Source | SS | df | MS | F | P |
---|---|---|---|---|---|
A | 5.63 | 1 | 5.63 | 5.45 | 0.03 |
B | 5.6 | 2 | 2.8 | 2.71 | 0.09 |
AB | 0.27 | 2 | 0.133 | 0.13 | 0.88 |
WG | 24.8 | 24 | 1.033 | ||
Total | 36.3 | 29 |
This ANOVA table allows any researcher to interpret the results of the experiment, at a glance.
The P-value (shown in the last column of the ANOVA table) is the probability that an F statistic would be more extreme (bigger) than the F statistic shown in the table, assuming the null hypothesis is true. When the P-value is bigger than the significance level, we accept the null hypothesis; when it is smaller, we reject it.
To assess the strength of a treatment effect, an experimenter can compute eta squared (η2). The computation is easy, using sum of squares entries from the ANOVA table, as shown below:
η2 = SSEFFECT / SST
where SSEFFECT is the sum of squares for a treatment effect and SST is the total sum of squares.
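In practice, a software package would produce this table directly. As a cross-check on the manual computations, here is a sketch using the statsmodels and pandas packages (assuming both are installed); for a balanced design, its Type II ANOVA table should match the fixed-effects table above.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per subject: Factor A level, Factor B level, score (Table 1).
groups = {
    ("A1", "B1"): [1, 2, 3, 2, 1], ("A1", "B2"): [1, 2, 4, 3, 1],
    ("A1", "B3"): [2, 3, 4, 3, 2], ("A2", "B1"): [2, 3, 4, 3, 2],
    ("A2", "B2"): [2, 3, 5, 2, 2], ("A2", "B3"): [3, 4, 5, 4, 3],
}
rows = [(a, b, y) for (a, b), ys in groups.items() for y in ys]
data = pd.DataFrame(rows, columns=["A", "B", "score"])

# Fit the full factorial model and print the ANOVA table.
model = ols("score ~ C(A) * C(B)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
# Expect: SS(A) = 5.633, SS(B) = 5.600, SS(AB) = 0.267, SS(resid) = 24.8
```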
Fixed vs. Random Factors
In the analysis above, both factors in the experiment were fixed factors. How would the analysis change if one or both factors were random factors?
As it turns out, there are only a few differences between analysis of variance with fixed factors and analysis of variance with random factors.
- With some effects, the expected mean square will be different for a fixed factor than for a random factor.
- When an expected mean square for an effect is different, the F ratio for that effect will also be different.
- When an F ratio is different, the P-value will also be different.
In short, analysis of variance with fixed effects is exactly the same as analysis of variance with random effects up to the point where expected mean squares come into the picture. Therefore, only a few adjustments are required to conduct analysis of variance with random factors. Specifically, we need to (1) find expected mean squares for random factors, (2) recompute F ratios based on the expected mean squares, and (3) find P-values for the new F ratios.
To illustrate what is going on, let's repeat the analysis of variance for our experiment with a random-effects model and with a mixed model.
Random-Effects Model
Assume that both factors in our experiment are random factors. We know that sums of squares, degrees of freedom for effects, and sample estimates of mean squares do not change for fixed factors versus random factors. Therefore, we can use the values that we computed earlier for fixed factors in a new analysis for random factors. Those values are shown in the ANOVA summary table below:
Analysis of Variance Table: Random Effects
Source | SS | df | MS | F | P |
---|---|---|---|---|---|
A | 5.63 | 1 | 5.63 | ??? | ??? |
B | 5.6 | 2 | 2.8 | ??? | ??? |
AB | 0.27 | 2 | 0.133 | ??? | ??? |
WG | 24.8 | 24 | 1.033 | ||
Total | 36.3 | 29 |
The ANOVA table has some gaps, indicated by question marks. To fill in the gaps, we need to:
- Find expected value of mean squares for random factors.
- Compute F ratios, based on expected mean squares.
- Find P-values for each F ratio.
Expected Value
The table below shows the expected value of mean squares for a balanced, two-factor, full factorial experiment when both factors are random:
Mean square | Expected value |
---|---|
MSA | σ2WG + nσ2AB + nqσ2A |
MSB | σ2WG + nσ2AB + npσ2B |
MSAB | σ2WG + nσ2AB |
MSWG | σ2WG |
F Ratio
The F ratio for each effect is defined in such a way that the expected value of the ratio equals one when σ2EFFECT equals zero. The table below shows how to construct F ratios for each effect when an experiment uses a random-effects model.
| Effect | F ratio | df ( v1 ) | df ( v2 ) |
|---|---|---|---|
| A | MSA / MSAB | p-1 | (p-1)(q-1) |
| B | MSB / MSAB | q-1 | (p-1)(q-1) |
| AB | MSAB / MSWG | (p-1)(q-1) | pq(n-1) |
For each effect, v1 is the degrees of freedom for the numerator of the F ratio; and v2 is the degrees of freedom for the denominator of the ratio.
Applying formulas from the table, we can compute F ratios for the main effects and the interaction effect, as shown below:
FA = F(v1, v2) = F(1, 2) = MSA / MSAB = 5.63 / 0.133 = 42.3
FB = F(v1, v2) = F(2, 2) = MSB / MSAB = 2.8 / 0.133 = 21.0
FAB = F(v1, v2) = F(2, 24) = MSAB / MSWG = 0.133 / 1.033 = 0.13
P-Values
At this point, we know the value of each F ratio; and we know the degrees of freedom associated with each F ratio. Therefore, we can use Stat Trek's F Distribution Calculator to find the probability that an F statistic will be bigger than an actual F ratio observed in the experiment.
To illustrate how this can be done, we'll find the P-value for the F ratio associated with Factor A. We enter three inputs into the F Distribution Calculator: the degrees of freedom v1 (1), the degrees of freedom v2 (2), and the observed F statistic (42.3).
From the calculator, we see that P( F > 42.3 ) equals about 0.02. Therefore, the P-value for Factor A is 0.02. Following the same procedure, we can find that the P-value for Factor B is 0.05; and the P-value for the AB interaction is 0.88.
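The scipy sketch from the fixed-effects analysis needs only new denominators and degrees of freedom for the random-effects model:

```python
from scipy.stats import f

ms_a, ms_b, ms_ab, ms_wg = 5.633, 2.800, 0.133, 1.033

# Main effects are tested against MS(AB) under the random-effects model.
print(round(f.sf(ms_a / ms_ab, 1, 2), 2))    # Factor A: 0.02
print(round(f.sf(ms_b / ms_ab, 2, 2), 2))    # Factor B: 0.05
print(round(f.sf(ms_ab / ms_wg, 2, 24), 2))  # AB interaction: 0.88
```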
ANOVA Summary Table
Based on the analysis above, we can fill in the gaps that originally existed in our ANOVA table. Here is the table with the gaps filled in.
Analysis of Variance Table: Random Effects
Source | SS | df | MS | F | P |
---|---|---|---|---|---|
A | 5.63 | 1 | 5.63 | 42.3 | 0.02 |
B | 5.6 | 2 | 2.8 | 21.0 | 0.05 |
AB | 0.27 | 2 | 0.133 | 0.13 | 0.88 |
WG | 24.8 | 24 | 1.033 | ||
Total | 36.3 | 29 |
Mixed Model
A mixed model describes an experiment in which at least one factor is a fixed factor, and at least one factor is a random factor. In our experiment, suppose we assume that Factor A is a fixed factor, and Factor B is a random factor.
We know that sums of squares, degrees of freedom for effects, and sample estimates of mean squares do not change for fixed factors versus random factors. Therefore, we can use the values that we computed earlier in a new analysis for the mixed model. Those values are shown in the ANOVA summary table below:
Analysis of Variance Table: Mixed Model
Source | SS | df | MS | F | P |
---|---|---|---|---|---|
A | 5.63 | 1 | 5.63 | ??? | ??? |
B | 5.6 | 2 | 2.8 | ??? | ??? |
AB | 0.27 | 2 | 0.133 | ??? | ??? |
WG | 24.8 | 24 | 1.033 | ||
Total | 36.3 | 29 |
The ANOVA table has some gaps, indicated by question marks. To fill in the gaps, we need to:
- Find expected value of mean squares for fixed factors and random factors.
- Compute F ratios, based on expected mean squares.
- Find P-values for each F ratio.
Expected Value
The table below shows the expected value of mean squares for a balanced, two-factor, full factorial experiment when Factor A is fixed and Factor B is random:
Mean square | Expected value |
---|---|
MSA | σ2WG + nσ2AB + nqσ2A |
MSB | σ2WG + npσ2B |
MSAB | σ2WG + nσ2AB |
MSWG | σ2WG |
F Ratio
The F ratio for each effect is defined in such a way that the expected value of the ratio equals one when σ2EFFECT equals zero. The table below shows how to construct F ratios for each effect when Factor A is a fixed effect, and Factor B is a random effect.
| Effect | F ratio | df ( v1 ) | df ( v2 ) |
|---|---|---|---|
| A | MSA / MSAB | p-1 | (p-1)(q-1) |
| B | MSB / MSWG | q-1 | pq(n-1) |
| AB | MSAB / MSWG | (p-1)(q-1) | pq(n-1) |
For each effect, v1 is the degrees of freedom for the numerator of the F ratio; and v2 is the degrees of freedom for the denominator of the ratio.
Applying formulas from the table, we can compute F ratios for the main effects and the interaction effect, as shown below:
FA = F(v1, v2) = F(1, 2) = MSA / MSAB = 5.63 / 0.133 = 42.3
FB = F(v1, v2) = F(2, 24) = MSB / MSWG = 2.8 / 1.033 = 2.71
FAB = F(v1, v2) = F(2, 24) = MSAB / MSWG = 0.133 / 1.033 = 0.13
P-Values
At this point, we know the value of each F ratio; and we know the degrees of freedom associated with each F ratio. Therefore, we can use Stat Trek's F Distribution Calculator to find the probability that an F statistic will be bigger than an actual F ratio observed in the experiment.
To illustrate how this can be done, we'll find the P-value for the F ratio associated with Factor B (the random factor). We enter three inputs into the F Distribution Calculator: the degrees of freedom v1 (2), the degrees of freedom v2 (24), and the observed F statistic (2.71).
From the calculator, we see that P( F > 2.71 ) equals about 0.09. Therefore, the P-value for Factor B is 0.09. Following the same procedure, we can find that the P-value for Factor A is 0.02; and the P-value for the AB interaction is 0.88.
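And here is the corresponding scipy sketch for the mixed model, where only the fixed factor (A) is tested against MS(AB):

```python
from scipy.stats import f

ms_a, ms_b, ms_ab, ms_wg = 5.633, 2.800, 0.133, 1.033

print(round(f.sf(ms_a / ms_ab, 1, 2), 2))    # Factor A: 0.02
print(round(f.sf(ms_b / ms_wg, 2, 24), 2))   # Factor B: 0.09
print(round(f.sf(ms_ab / ms_wg, 2, 24), 2))  # AB interaction: 0.88
```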
ANOVA Summary Table
Based on the analysis above, we can fill in the gaps that originally existed in our ANOVA table. Here is the table with the gaps filled in.
Analysis of Variance Table: Mixed Model
Source | SS | df | MS | F | P |
---|---|---|---|---|---|
A | 5.63 | 1 | 5.63 | 42.3 | 0.02 |
B | 5.6 | 2 | 2.8 | 2.71 | 0.09 |
AB | 0.27 | 2 | 0.133 | 0.13 | 0.88 |
WG | 24.8 | 24 | 1.033 | ||
Total | 36.3 | 29 |