### Analysis of Variance: Table of Contents

- Introduction
- Completely randomized design
- Full factorial design
- Randomized block design
- Calculators
- Appendices


# Experimental Design

There is a close relationship between experimental design and statistical analysis. The way that an experiment is designed determines the types of analyses that can be appropriately conducted.

In this lesson, we review aspects of experimental design that a researcher must understand in order to properly interpret experimental data with analysis of variance.

## What Is an Experiment?

An **experiment** is a procedure carried out to investigate cause-and-effect relationships.
For example, the experimenter may manipulate one or more variables (independent variables) to assess the
effect on another variable (the dependent variable).

Conclusions are reached on the basis of data. If the dependent variable is unaffected by changes in independent variables, we conclude that there is no causal relationship between the dependent variable and the independent variables. On the other hand, if the dependent variable is affected, we conclude that a causal relationship exists.

## What Is Experimental Design?

The term **experimental design** refers to a plan for conducting the experiment in such a way
that research results will be valid and easy to interpret. This plan includes three interrelated activities:

- Write statistical hypotheses.
- Collect data.
- Analyze data.

Let's look in a little more detail at these three activities.

## Statistical Hypotheses

A **statistical hypothesis** is an assumption about the value of a population
parameter.
There are two types of statistical hypotheses:

**Null hypothesis.** The null hypothesis is the statement subjected to a statistical test in an experiment. It is denoted by H_{0}. For example, consider the following null hypothesis:

H_{0}: μ_{i} = μ_{j}

Here, μ_{i} is the population mean for group *i*, and μ_{j} is the population mean for group *j*. This hypothesis assumes that the population means in groups *i* and *j* are equal.

**Alternative hypothesis.** The alternative hypothesis is the hypothesis that is tenable if the null hypothesis is rejected. It is denoted by H_{1} or H_{a}. For example, consider the following alternative hypothesis:

H_{1}: μ_{i} ≠ μ_{j}

This hypothesis assumes that the population means in groups *i* and *j* are not equal.

The null hypothesis and the alternative hypothesis are written to be mutually exclusive. If one is true, the other is not.

Experiments rely on sample data to test the null hypothesis. If experimental results, based on sample statistics, are consistent with the null hypothesis, the null hypothesis cannot be rejected; otherwise, the null hypothesis is rejected in favor of the alternative hypothesis.
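As a sketch of how sample data bear on a null hypothesis such as H_{0}: μ_{i} = μ_{j}, the snippet below computes a two-sample t statistic with Python's standard library. The data values are made up for illustration.

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical measurements for two treatment groups
group_i = [10, 12, 11, 13]
group_j = [14, 15, 13, 14]

n_i, n_j = len(group_i), len(group_j)

# Pooled estimate of the common population variance
pooled_var = ((n_i - 1) * variance(group_i) +
              (n_j - 1) * variance(group_j)) / (n_i + n_j - 2)

# Two-sample t statistic for H0: mu_i = mu_j
t = (mean(group_i) - mean(group_j)) / sqrt(pooled_var * (1 / n_i + 1 / n_j))
```

A t value far from zero is evidence against the null hypothesis; the exact cutoff comes from the t distribution with n_i + n_j − 2 degrees of freedom.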

## Data Collection

The data collection phase of experimental design is all about methodology - how to run the experiment to produce valid, relevant statistics that can be used to test a null hypothesis.

### Identify Variables

Every experiment exists to examine a cause-and-effect relationship. With respect to the relationship under investigation, an experimental design needs to account for three types of variables:

- **Dependent variable.** The dependent variable is the outcome being measured, the effect in a cause-and-effect relationship.
- **Independent variables.** An independent variable is a variable that is explicitly included in an experiment, because the experimenter believes it is a potential cause in a cause-and-effect relationship.
- **Extraneous variables.** An extraneous variable is any other variable that could affect the dependent variable, but is not explicitly included in the experiment.

**Note:** The independent variables that are explicitly included in an
experiment are also called **factors**.

### Define Treatment Groups

In an experiment, treatment groups are built around factors, each group defined by a unique combination of factor levels.

For example, suppose that a drug company wants to test a new cholesterol medication. The dependent variable is total cholesterol level. One independent variable is dosage. And, since some drugs affect men and women differently, the researchers include a second independent variable - gender.

This experiment has two factors - dosage and gender. The dosage factor has three levels (0 mg, 50 mg, and 100 mg), and the gender factor has two levels (male and female). Given this combination of factors and levels we can define six unique treatment groups, as shown below:

| Gender | Dose: 0 mg | Dose: 50 mg | Dose: 100 mg |
|--------|------------|-------------|--------------|
| Male   | Group 1    | Group 2     | Group 3      |
| Female | Group 4    | Group 5     | Group 6      |
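The six treatment groups are simply the Cartesian product of the factor levels, which a short Python sketch can enumerate (factor labels taken from the cholesterol example):

```python
from itertools import product

# Factor levels from the cholesterol example
genders = ["Male", "Female"]
doses = ["0 mg", "50 mg", "100 mg"]

# Each treatment group is one unique combination of factor levels
treatment_groups = list(product(genders, doses))

for number, (gender, dose) in enumerate(treatment_groups, start=1):
    print(f"Group {number}: {gender}, {dose}")
```

With two levels of one factor and three of the other, the design always yields 2 × 3 = 6 groups.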

### Select Factor Levels

A factor in an experiment can be described by the way in which factor levels are chosen for inclusion in the experiment:

- **Fixed factor.** The experiment includes all factor levels about which inferences are to be made.
- **Random factor.** The experiment includes a random sample of levels from a much bigger population of factor levels.

Experiments can be described by the presence or absence of fixed or random factors:

- **Fixed-effects model.** All of the factors in the experiment are fixed.
- **Random-effects model.** All of the factors in the experiment are random.
- **Mixed model.** At least one factor in the experiment is fixed, and at least one factor is random.

The use of fixed factors versus random factors has implications for how experimental results are interpreted. With a fixed factor, results apply only to factor levels that are explicitly included in the experiment. With a random factor, results apply to every factor level from the population.

For example, consider the cholesterol experiment described above. Suppose the experimenter only wanted to test the effect of three particular dosage levels - 0 mg, 50 mg, and 100 mg. He would include those dosage levels in the experiment, and any research conclusions would apply only to those particular dosage levels. This would be an example of a fixed-effects model.

On the other hand, suppose the experimenter wanted to test the effect of any dosage level.
Since it is not practical to test *every* dosage level, the experimenter might choose
three dosage levels at random from the population of possible dosage levels. Any research conclusions would apply not only to the
selected dosage levels, but also to other dosage levels that were not included explicitly in the experiment.
This would be an example of a random-effects model.

### Select Experimental Units

The experimental unit is the entity that provides values for the dependent variable. Depending on the needs of the study, an experimental unit may be a person, animal, plant, product - anything. For example, in the cholesterol study described above, researchers measured cholesterol level (the dependent variable) of people; so the experimental units were people.

**Note:** When the experimental units are people, they are often
referred to as **subjects**. Some researchers prefer the term **participant**,
because subject has a connotation that the person is subservient.

If time and money were no object, you would include the entire population of experimental units in your experiment. In the real world, where there is never enough time or money, you will usually select a sample of experimental units from the population.

Ultimately, you want to use sample data to make inferences about population parameters. With that in mind, it is best practice to draw a random sample of experimental units from the population. This provides a defensible, statistical basis for generalizing from sample findings to the larger population.
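Drawing a simple random sample can be sketched with Python's standard library; the sampling frame of 500 subject IDs below is hypothetical.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: ID numbers for 500 eligible subjects
population = list(range(1, 501))

# Draw a simple random sample of 30 experimental units, without replacement
sample = random.sample(population, k=30)
```

Because `random.sample` draws without replacement, no experimental unit can appear in the sample twice.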

Finally, it is important to consider sample size. The larger the sample, the greater the statistical power; and the more confidence you can have in your results.

### Assign Experimental Units to Treatments

Having selected a sample of experimental units, we need to assign each unit to one or more treatment groups. Here are two ways that you might assign experimental units to groups:

- **Independent groups design.** Each experimental unit is randomly assigned to one, and only one, treatment group. This is also known as a **between-subjects design**.
- **Repeated measures design.** Experimental units are assigned to more than one treatment group. This is also known as a **within-subjects design**.
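Random assignment for an independent groups design can be sketched by shuffling the sample and slicing it into equal groups; the subject labels and group names below are hypothetical.

```python
import random

random.seed(7)  # fixed seed for a reproducible illustration

# Twelve hypothetical subjects, to be split evenly across three dose groups
subjects = [f"S{i:02d}" for i in range(1, 13)]
random.shuffle(subjects)

# After shuffling, consecutive slices form randomly composed groups
groups = {
    "0 mg": subjects[0:4],
    "50 mg": subjects[4:8],
    "100 mg": subjects[8:12],
}
```

Because the slices do not overlap, each subject lands in exactly one treatment group, as the independent groups design requires.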

### Control for Extraneous Variables

Extraneous variables can mask effects of independent variables. Therefore, a good experimental design controls potential effects of extraneous variables. Here are a few strategies for controlling extraneous variables:

- **Randomization.** Assign subjects randomly to treatment groups. This tends to distribute effects of extraneous variables evenly across groups.
- **Repeated measures design.** To control for individual differences between subjects (age, attitude, religion, etc.), assign each subject to multiple treatments. This strategy is called using subjects as their own control.
- **Counterbalancing.** In repeated measures designs, randomize or reverse the order of treatments among subjects to control for order effects (e.g., fatigue, practice).
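One simple counterbalancing scheme gives half the subjects each treatment order, sketched below with made-up subject and treatment labels.

```python
import random

random.seed(1)  # fixed seed for a reproducible illustration

treatments = ["Drug", "Placebo"]
subjects = ["S1", "S2", "S3", "S4", "S5", "S6"]

# Shuffle subjects, then alternate the two possible treatment orders
random.shuffle(subjects)
orders = {}
for position, subject in enumerate(subjects):
    if position % 2 == 0:
        orders[subject] = ["Drug", "Placebo"]
    else:
        orders[subject] = ["Placebo", "Drug"]
```

Because the orders alternate, any order effect (practice, fatigue) is balanced across the two treatments rather than confounded with one of them.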

As we describe specific experimental designs in upcoming lessons, we will point out the strategies that are used with each design to control the confounding effects of extraneous variables.

## Data Analysis

Researchers follow a formal process to determine whether to reject a null hypothesis, based on sample data. This process, called hypothesis testing, consists of five steps:

1. **Formulate hypotheses.** This involves stating the null and alternative hypotheses. Because the hypotheses are mutually exclusive, if one is true, the other must be false.
2. **Choose the test statistic.** This involves specifying the statistic that will be used to assess the validity of the null hypothesis. Typically, in analysis of variance studies, researchers compute an F ratio to test hypotheses.
3. **Compute a P-value, based on sample data.** Suppose the observed test statistic is equal to *S*. The P-value is the probability that the experiment would yield a test statistic as extreme as *S*, assuming the null hypothesis is true.
4. **Choose a significance level.** The significance level, denoted by α, is the probability of rejecting the null hypothesis when it is really true. Researchers often choose a significance level of 0.05 or 0.01.
5. **Test the null hypothesis.** If the P-value is smaller than the significance level, we reject the null hypothesis; if it is larger, we fail to reject it.

A good experimental design includes a precise plan for data analysis. Before the first data point is collected, a researcher should know how experimental data will be processed to accept or reject the null hypotheses.
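To make the F ratio concrete, the sketch below computes a one-way ANOVA F ratio from scratch with Python's standard library; the three groups of scores are invented for illustration.

```python
from statistics import mean

# Hypothetical dependent-variable scores for three treatment groups
groups = [[3, 4, 5], [5, 6, 7], [7, 8, 9]]

k = len(groups)                      # number of treatment groups
N = sum(len(g) for g in groups)      # total number of observations
grand_mean = mean(x for g in groups for x in g)

# Between-groups sum of squares and mean square
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ms_between = ss_between / (k - 1)

# Within-groups sum of squares and mean square
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
ms_within = ss_within / (N - k)

# The F ratio compares between-group variation to within-group variation
f_ratio = ms_between / ms_within
```

The P-value would then come from the F distribution with k − 1 and N − k degrees of freedom; a large F ratio indicates that the group means differ by more than within-group variation would explain.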

## Test Your Understanding

**Problem 1**

In a well-designed experiment, which of the following statements is true?

I. The null hypothesis and the alternative hypothesis are mutually exclusive.

II. The null hypothesis is subjected to statistical test.

III. The alternative hypothesis is subjected to statistical test.

(A) I only

(B) II only

(C) III only

(D) I and II

(E) I and III

**Solution**

The correct answer is (D). The null hypothesis and the alternative hypothesis are mutually exclusive; if one is true, the other must be false. Only the null hypothesis is subjected to statistical test; the alternative hypothesis is not tested explicitly. When the null hypothesis is rejected, the alternative hypothesis is accepted in its place.

**Problem 2**

In a true experiment, each subject is assigned to only one treatment group. What type of design is this?

(A) Independent groups design

(B) Repeated measures design

(C) Within-subjects design

(D) None of the above

(E) All of the above

**Solution**

The correct answer is (A). In an independent groups design, each experimental unit is assigned to one treatment group. In the other two designs, each experimental unit is assigned to more than one treatment group.

**Problem 3**

In a true experiment, which of the following does the experimenter control?

(A) How to manipulate independent variables.

(B) How to assign subjects to treatment conditions.

(C) How to control for extraneous variables.

(D) None of the above

(E) All of the above

**Solution**

The correct answer is (E). The experimenter chooses factors and factor levels for the experiment, assigns experimental units to treatment groups (often through a random process), and implements strategies (randomization, counterbalancing, etc.) to control the influence of extraneous variables.