An ANOVA is for testing the equality of several means simultaneously. A single quantitative response variable is required with one or more qualitative explanatory variables, i.e., factors.
Each experimental unit is assigned to exactly one factor-level combination, i.e., “one measurement per individual” (no repeated measures), typically with “equal numbers of individuals per group” (a balanced design).
An ANOVA is only appropriate when all of the following are satisfied.
The sample(s) of data can be considered to be representative of their population(s).
The data is normally distributed in each group. (This can safely be assumed to be satisfied when the residuals from the ANOVA appear to be normally distributed in a Q-Q plot.)
The population variance of each group can be assumed to be the same. (This can be safely assumed to be satisfied when the residuals from the ANOVA show constant variance, i.e., are similarly vertically spread out in a Residuals versus fitted-values plot.)
Hypotheses
For a One-way ANOVA
\[ H_0: \mu_1 = \mu_2 = \ldots = \mu_m = \mu \] \[ H_a: \mu_i \neq \mu \ \text{for at least one} \ i \in \{1,\ldots,m\} \]
Mathematical Model
A typical model for a one-way ANOVA is of the form \[ Y_{ij} = \mu_i + \epsilon_{ij} \] where \(\mu_i\) is the mean for level (group) \(i\), and \(\epsilon_{ij} \sim N(0,\sigma^2)\) is the error term for each point \(j\) within level (group) \(i\).
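To see the model in action, here is a minimal sketch (not from the original reference: the seed, group means, and sample sizes are illustrative assumptions) that simulates data from \( Y_{ij} = \mu_i + \epsilon_{ij} \) and fits the ANOVA.

# Hypothetical example: simulate Y_ij = mu_i + epsilon_ij for three groups.
set.seed(10)                                   # illustrative seed for reproducibility
mu <- c(5, 5, 8)                               # hypothetical group means mu_i
g <- factor(rep(c("A", "B", "C"), each = 10))  # group labels, 10 points per group
Y <- mu[as.integer(g)] + rnorm(30, mean = 0, sd = 1)  # epsilon_ij ~ N(0, sigma^2)
summary(aov(Y ~ g))                            # fit and summarize the one-way ANOVA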
Console Help Command: ?aov()
myaov is some name you come up with to store the results of the aov() test.
Y must be a “numeric” vector of the quantitative response variable.
X is a qualitative variable (should have class(X) equal to factor or character). If it does not, use as.factor(X) inside the aov(Y ~ as.factor(X), ...) command.
YourDataSet is the name of your data set.
Perform the ANOVA
myaov <- aov(Y ~ X, data=YourDataSet)
summary(myaov)
Diagnose ANOVA Assumptions
par(mfrow=c(1,2))
plot(myaov, which=1:2)
Example Code
Perform the ANOVA
# aov() is the R function that performs the ANOVA; the result is saved
# as an object named 'chick.aov'.
# Y is 'weight', a numeric variable from the chickwts dataset.
# The tilde '~' separates the Y and X in a model formula.
# X is 'feed', a qualitative variable in the chickwts dataset; more
# specifically, a factor with six levels: "casein", "horsebean", and so on.
# Use str(chickwts) to see this. 'chickwts' is a dataset in R.
chick.aov <- aov(weight ~ feed, data = chickwts)

# summary() shows the results of the ANOVA stored in 'chick.aov'.
summary(chick.aov)
Diagnose the ANOVA
# par() is an R function that can be used to set or query graphical
# parameters. The mfrow parameter controls "multiple frames on a row";
# c(1,2) specifies 1 row of 2 plots, so the two diagnostic plots are
# placed side by side.
par(mfrow = c(1,2))

# plot() is an R function for plotting R objects; 'chick.aov' is the name
# of the ANOVA. which = 1:2 selects which of the 6 available plots to
# graph: 1 is the Residuals vs Fitted plot and 2 is the Normal Q-Q plot.
# Both are needed to check the ANOVA assumptions.
plot(chick.aov, which = 1:2)
Analysis of variance (ANOVA) is often applied to the scenario of testing for the equality of three or more means from (possibly) separate normal distributions of data. The normality assumption is required no matter the sample size. If the distributions are skewed, then a nonparametric test should be applied instead of ANOVA.
One-way ANOVA is when a completely randomized design is used with a single factor of interest. A typical mathematical model for a one-way ANOVA is of the form \[ Y_{ik} = \mu_i + \epsilon_{ik} \quad (\text{sometimes written}\ Y_{ik} = \mu + \alpha_i + \epsilon_{ik}) \] where \(\mu_i\) is the mean of each group (or level) \(i\) of a factor, and \(\epsilon_{ik}\sim N(0,\sigma^2)\) is the error term. The plot below demonstrates what these symbols represent. Note that the notation \(\epsilon_{ik}\sim N(0,\sigma^2)\) states that we are assuming the error term \(\epsilon_{ik}\) is normally distributed with a mean of 0 and a standard deviation of \(\sigma\).
The aim of ANOVA is to determine which hypothesis is more plausible, that the means of the different distributions are all equal (the null), or that at least one group mean differs (the alternative). Mathematically, \[ H_0: \mu_1 = \mu_2 = \ldots = \mu_m = \mu \] \[ H_a: \mu_i \neq \mu \quad \text{for at least one}\ i\in\{1,\ldots,m\}. \] In other words, the goal is to determine if it is more plausible that each of the \(m\) different samples (where each sample is of size \(n\)) came from the same normal distribution (this is what the null hypothesis claims) or that at least one of the samples (and possibly several or all) come from different normal distributions (this is what the alternative hypothesis claims).
The first figure below demonstrates what a given scenario might look like when all \(m=3\) samples of data are from the same normal distribution. In this case, the null hypothesis \(H_0\) is true. Notice that the variability of the sample means is smaller than the variability of the points.
The figure below shows what a given scenario might look like for \(m=3\) samples of data from three different normal distributions. In this case, the alternative hypothesis \(H_a\) is true. Notice that the variability of the sample means, i.e., \((\bar{x}_1,\bar{x}_2,\bar{x}_3)\), is greater than the variability of the points.
The above plots are useful in understanding the mathematical details behind ANOVA and why it is called analysis of variance. Recall that variance is a measure of the spread of data. When data is very spread out, the variance is large. When the data is close together, the variance is small. ANOVA utilizes two important variances, the between groups variance and the within groups variance.
Between groups variance: a measure of the variability in the sample means, the \(\bar{x}\)’s.
Within groups variance: a combined measure of the variability of the points within each sample.
The plot below combines the information from the previous plots for ease of reference. It emphasizes the fact that when the null hypothesis is true, the points should have a large variance (be really spread out) while the sample means are relatively close together. On the other hand, when the points are relatively close together within each sample and the sample means have a large variance (are really spread out), then the alternative hypothesis is true. This is the theory behind analysis of variance, or ANOVA.
The ratio of the “between groups variation” to the “within groups variation” provides the test statistic for ANOVA. Note that the test statistic of ANOVA is an \(F\) statistic.
\[ F = \frac{\text{Between groups variation}}{\text{Within groups variation}} \]
It would be good to take a minute and review the \(F\) distribution. The \(p\)-value for ANOVA thus comes from an \(F\) distribution with parameters \(p_1 = m-1\) and \(p_2 = n-m\) where \(m\) is the number of samples and \(n\) is the total number of data points.
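For instance, assuming \(m = 3\) samples and \(n = 30\) total data points (the sizes used in the worked example later in this section), a sketch of the p-value computation in R would be:

# Upper-tail probability of the F distribution with df1 = m-1 and df2 = n-m.
m <- 3; n <- 30       # number of samples and total number of points (assumed)
Fstat <- 70.2         # the F-value reported in the example ANOVA table below
pf(Fstat, df1 = m - 1, df2 = n - m, lower.tail = FALSE)  # roughly 2e-11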
It is useful to take a few minutes and explain the word variance as well as mathematically define the terms “within group variance” and “between groups variance.”
Variance is a statistical measure of the variability in data. The square root of the variance is called the standard deviation and is by far the more typical measure of spread. This is because standard deviation is easier to interpret. However, mathematically speaking, the variance is the more important measurement.
As mentioned previously, the variance turns out to be the key to determining which hypothesis is the most plausible, \(H_0\) or \(H_a\), when several means are under consideration. There are two variances that are important for ANOVA, the “within groups variance” and the “between groups variance.”
Recall that the formula for computing a sample variance is given by \[ s^2 = \frac{\sum_{i=1}^n(x_i - \bar{x})^2}{n-1} \quad\leftarrow \frac{\text{sum of squares}}{\text{degrees of freedom}} \] This formula has a couple of important pieces that are so important they have been given special names. The \(n-1\) in the denominator of the formula is called the “degrees of freedom.” The other important part of this formula is the \(\sum_{i=1}^n(x_i - \bar{x})^2\), which is called the “sum of squared errors” or sometimes just the “sum of squares” or “SS” for short. Thus, the sample variance is calculated by computing a “sum of squares” and dividing this by the “degrees of freedom.”
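As a quick numeric check of this formula in R (the data values here are made up purely for illustration):

x <- c(2, 4, 9)              # illustrative data
SS <- sum((x - mean(x))^2)   # sum of squares: 26
SS / (length(x) - 1)         # sum of squares / degrees of freedom = 13
var(x)                       # R's var() returns the same value, 13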
It turns out that this general approach works for many different contexts. Specifically, it allows us to compute the “within groups variance” and the “between groups variance.” To introduce the mathematical definitions of these two variances, we need to introduce some new notation.
Let \(\bar{y}_{i\bullet}\) represent the sample mean of group \(i\) for \(i=1,\ldots,m\).
Let \(n_i\) denote the sample size in group \(i\).
Let \(\bar{y}_{\bullet\bullet}\) represent the sample mean of all \(n = n_1+n_2+\cdots+n_m\) data points.
The mathematical calculations for each of these variances are given as follows. \[ \text{Between groups variance} = \frac{\sum_{i=1}^m n_i(\bar{y}_{i\bullet}-\bar{y}_{\bullet\bullet})^2}{m-1} \leftarrow \frac{\text{Between groups sum of squares}}{\text{Between groups degrees of freedom}} \] \[ \text{Within groups variance} = \frac{\sum_{i=1}^m\sum_{k=1}^{n_i}(y_{ik}-\bar{y}_{i\bullet})^2}{n-m} \leftarrow \frac{\text{Within groups sum of squares}}{\text{Within groups degrees of freedom}} \]
The following table provides three samples of data: A, B, and C. These samples were randomly generated from normal distributions using a computer. The true means \(\mu_1, \mu_2\), and \(\mu_3\) of the normal distributions are thus known, but withheld from you at this point of the example.
A | B | C |
---|---|---|
13.15457 | 13.17463 | 16.66831 |
12.65225 | 12.16277 | 15.54719 |
13.73061 | 12.76905 | 16.63074 |
14.43471 | 13.38524 | 15.06726 |
13.79728 | 12.02690 | 15.57534 |
13.88599 | 13.24651 | 15.99915 |
12.77753 | 12.58386 | 15.58995 |
13.81536 | 12.64615 | 16.99429 |
13.03635 | 12.52055 | 15.47153 |
14.26062 | 14.03566 | 16.13330 |
An ANOVA will be performed with the sample data to determine which hypothesis is more plausible: \[ H_0: \mu_1 = \mu_2 = \mu_3 = \mu \] \[ H_a: \mu_i \neq \mu \ \text{for at least one} \ i \in \{1,\ldots,m\} \]
To perform an ANOVA, we must compute the between groups variance and the within groups variance. This requires the between groups sum of squares, the within groups sum of squares, the between groups degrees of freedom, and the within groups degrees of freedom. Note that to get the sums of squares, we first had to calculate \(\bar{y}_{1\bullet}\), \(\bar{y}_{2\bullet}\), \(\bar{y}_{3\bullet}\), and \(\bar{y}_{\bullet\bullet}\), where the 1, 2, 3 correspond to Samples A, B, and C, respectively. After some work, we find these values to be \[ \bar{y}_{1\bullet} = 13.55, \quad \bar{y}_{2\bullet} = 12.86, \quad \bar{y}_{3\bullet} = 15.97 \] and \[ \bar{y}_{\bullet\bullet} = \frac{13.55+12.86+15.97}{3} = 14.13 \] (averaging the group means works here because each sample has the same size, \(n_i = 10\)). Using these values we can then compute the between groups sum of squares and the within groups sum of squares according to the formulas stated previously. This process is very tedious and will not be demonstrated by hand. Only the results are shown in the following table, which summarizes all the important information.
 | Degrees of Freedom | Sum of Squares | Variance | F-value | p-value |
---|---|---|---|---|---|
Between groups | 2 | 53.3 | 26.67 | 70.2 | 2e-11 |
Within groups | 27 | 10.3 | 0.38 | | |
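Although tedious by hand, the table above can be reproduced in R by entering the three samples and letting aov() do the work (the object names below are arbitrary):

A <- c(13.15457, 12.65225, 13.73061, 14.43471, 13.79728,
       13.88599, 12.77753, 13.81536, 13.03635, 14.26062)
B <- c(13.17463, 12.16277, 12.76905, 13.38524, 12.02690,
       13.24651, 12.58386, 12.64615, 12.52055, 14.03566)
C <- c(16.66831, 15.54719, 16.63074, 15.06726, 15.57534,
       15.99915, 15.58995, 16.99429, 15.47153, 16.13330)
abc <- data.frame(y = c(A, B, C), group = rep(c("A", "B", "C"), each = 10))
summary(aov(y ~ group, data = abc))  # degrees of freedom, sums of squares, F, and p match the table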
In general, the ANOVA table is created as follows.
 | Degrees of Freedom | Sum of Squares | Variance | F-value | p-value |
---|---|---|---|---|---|
Between groups | \(m-1\) | \(\sum_{i=1}^m n_i(\bar{y}_{i\bullet}-\bar{y}_{\bullet\bullet})^2\) | \(\frac{\text{sum of squares}}{\text{degrees of freedom}}\) | \(\frac{\text{Between groups variance}}{\text{Within groups variance}}\) | \(F\)-distribution tail probability |
Within groups | \(n-m\) | \(\sum_{i=1}^m\sum_{k=1}^{n_i}(y_{ik}-\bar{y}_{i\bullet})^2\) | \(\frac{\text{sum of squares}}{\text{degrees of freedom}}\) | | |
The requirements for an analysis of variance (the assumptions of the test) are two-fold and concern only the error terms, the \(\epsilon_{ik}\).
The errors are normally distributed.
The variance of the errors is constant.
Both of these assumptions were stated in the mathematical model where we assumed that \(\epsilon_{ik}\sim N(0,\sigma^2)\).
To check that the ANOVA assumptions are satisfied, the data in each group could be checked for normality using Q-Q plots, and the sample variance of each group should be relatively constant. However, the fastest way to check both assumptions at once is by analyzing the residuals.
Examples: chickwts (One-way)
A two-way ANOVA is only appropriate when all of the following are satisfied.
The sample(s) of data can be considered to be representative of their population(s).
The data is normally distributed in each group. (This can safely be assumed to be satisfied when the residuals from the ANOVA are normally distributed.)
The population variance of each group can be assumed to be the same. (This can be safely assumed to be satisfied when the residuals from the ANOVA show constant variance, i.e., are similarly vertically spread out.)
Hypotheses
With a two-way ANOVA there are three sets of hypotheses. Writing out the hypotheses can be very involved depending on whether you use the official “effects model” notation (very mathematically correct) or a simplified “means model” notation (which isn’t very mathematically correct, but gets the idea across in an acceptable way).
The first set of hypotheses is a “one-way” set of hypotheses for the first factor of the ANOVA. Factor: X1 with, say, levels \(A\) and \(B\). \[ H_0: \mu_A = \mu_B = \mu \] \[ H_a: \mu_A \neq \mu_B \]
The second set of hypotheses is also a “one-way” set of hypotheses, but for the second factor of the ANOVA. Factor: X2 with, say, levels \(C\), \(D\), and \(E\). \[ H_0: \mu_C = \mu_D = \mu_E = \mu \] \[ H_a: \mu_i \neq \mu \ \text{for at least one}\ i\in\{1=C,2=D,3=E\} \]
The third set of hypotheses are the most interesting hypotheses in a two-way ANOVA. These are called the interaction hypotheses. They test to see if the levels of one of the factors, say \(X1\), impact \(Y\) differently for the differing levels of the other factor, \(X2\). The hypotheses read formally as
\[ H_0: \text{The effect of the first factor on Y} \\ \text{is the same for all levels of the second factor.} \] \[ H_a: \text{The effect of the first factor on Y is not the same} \\ \text{for all levels of the second factor.} \]
A mathematically correct way to state the two-way ANOVA model is with the equation \[ Y_{ijk} = \mu + \alpha_i + \beta_j + \alpha\beta_{ij} + \epsilon_{ijk} \] In this model, \(\mu\) is the grand mean (the average Y-value ignoring all information contained in the factors); \(\alpha_i\) is the effect of level \(i\) of the first factor \(X1\), which has levels \(A\) and \(B\) (though there could be more levels in \(X1\) depending on your data); \(\beta_j\) is the effect of level \(j\) of the second factor \(X2\), which has levels \(C\), \(D\), and \(E\) (though there could be fewer or more levels to this factor depending on your data); \(\alpha\beta_{ij}\) is the interaction of the two factors, which has \(2\times3=6\) levels (this may differ for your data); and \(\epsilon_{ijk} \sim N(0,\sigma^2)\) is the normally distributed error term for each point \(k\) found within level \(i\) of \(X1\) and level \(j\) of \(X2\).
This model allows us to more formally state the hypotheses as
First factor \(X1\) with, say, levels \(A\) and \(B\). \[ H_0: \alpha_A = \alpha_B = 0 \] \[ H_a: \alpha_i \neq 0 \ \text{for at least one}\ i\in\{1=A,2=B\} \]
Second factor \(X2\) with, say, levels \(C\), \(D\), and \(E\). \[ H_0: \beta_C = \beta_D = \beta_E = 0 \] \[ H_a: \beta_j \neq 0 \ \text{for at least one}\ j\in\{1=C,2=D,3=E\} \]
Does the effect of the first factor (\(X1\)) change for the different levels of the second factor (\(X2\))? In other words, is there an interaction between the two factors \(X1\) and \(X2\)?
\[ H_0: \alpha\beta_{ij} = 0 \ \text{for all } i,j \] \[ H_a: \alpha\beta_{ij} \neq 0 \ \text{for at least one } i,j \]
Console Help Command: ?aov()
Perform the ANOVA

myaov <- aov(Y ~ X1+X2+X1:X2, data=YourDataSet)

View the ANOVA results

summary(myaov)

Check the ANOVA assumptions

plot(myaov, which=1:2)
myaov is some name you come up with to store the results of the aov() test.
Y must be a “numeric” vector of the quantitative response variable.
X1 is a qualitative variable (should have class(X1) equal to factor or character). If it does not, use as.factor(X1) inside the aov() command.
X2 is a second qualitative variable that should also be either a factor or a character vector.
X3, X4, and so on could also be added to the model with + if desired, but this would create a three-way, or four-way ANOVA model, and so on.
X1:X2 denotes the interaction of the factors X1 and X2. It is not required, but is usually included.
YourDataSet is the name of your data set.
Example Code
# aov() performs the ANOVA; the result is saved as an object named
# 'warp.aov'.
# Y is 'breaks', a numeric variable from the warpbreaks dataset.
# The tilde '~' separates the left- and right-hand sides of the model
# formula.
# The first factor X1 is 'wool', a qualitative variable in the warpbreaks
# dataset; in this case, wool is a factor with two levels.
# The second factor X2 is 'tension', another qualitative variable in the
# warpbreaks dataset; in this case, tension is a factor with three levels.
# Use str(warpbreaks) to see this.
# wool:tension is the interaction of the two factors.
# 'warpbreaks' is a dataset in R.
warp.aov <- aov(breaks ~ wool + tension + wool:tension, data = warpbreaks)

# summary() shows the results of the ANOVA stored in 'warp.aov'.
summary(warp.aov)
# par() sets or queries graphical parameters. The first item inside the
# combine function c() is the number of rows and the second is the number
# of columns, so c(1,2) places the two diagnostic plots side by side.
par(mfrow = c(1,2))

# Shows the Residuals vs Fitted plot and the Normal Q-Q plot to check the
# ANOVA assumptions.
plot(warp.aov, which = 1:2)
The hypotheses that can be tested in a two-way ANOVA that includes an interaction term are three-fold.
Hypotheses about \(\alpha\) where \(\alpha\) has \(m\) levels. \[ H_0: \alpha_1 = \alpha_2 = \ldots = \alpha_m = 0 \] \[ H_a: \alpha_i \neq 0\ \text{for at least one}\ i\in\{1,\ldots,m\} \]
Hypotheses about \(\beta\) where \(\beta\) has \(q\) levels. \[ H_0: \beta_1 = \beta_2 = \ldots = \beta_q = 0 \] \[ H_a: \beta_j \neq 0\ \text{for at least one}\ j\in\{1,\ldots,q\} \]
Hypotheses about the interaction term \(\alpha\beta\). \[ H_0: \text{the effect of one factor is the same across all levels of the other factor} \] \[ H_a: \text{the effect of one factor differs for at least one level of the other factor} \]
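One informal way to explore the interaction hypotheses is with an interaction plot. Here is a minimal sketch using the warpbreaks data from the example above (interaction.plot() is a base R function; the plot is a visual aid, not a formal test):

# Mean 'breaks' at each tension level, with one trace per wool type.
# Roughly parallel traces suggest no interaction; crossing traces suggest one.
with(warpbreaks, interaction.plot(tension, wool, breaks))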
It turns out that more can be done with ANOVA than simply checking to see if the means of several groups differ. Reconsider the mathematical model of two-way ANOVA. \[ Y_{ijk} = \mu + \alpha_i + \beta_j + \alpha\beta_{ij} + \epsilon_{ijk} \] This model could be expanded to include any number of new terms in the model. The power of this approach is in the several questions (hypotheses) that can be posed to data simultaneously.
Examples: warpbreaks (Two-way), CO2 (three-way)
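As a sketch of how the model expands, here is one possible three-way ANOVA for the CO2 dataset mentioned above (treating the numeric conc variable as a factor is an illustrative modeling choice, not the only option):

# Type and Treatment are factors; as.factor(conc) converts the numeric
# concentration variable into a factor so it can serve as a third factor.
co2.aov <- aov(uptake ~ Type * Treatment * as.factor(conc), data = CO2)
summary(co2.aov)  # main effects, all two-way interactions, and the three-way interaction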
A block design is appropriate when repeated measures, or other factors that group individuals into similar groups (blocks), are included in the study design.
A typical model for a block design is of the form \[ Y_{lijk} = \mu + B_l + \alpha_i + \beta_j + \alpha\beta_{ij} + \epsilon_{ijlk} \] where \(\mu\) is the grand mean, \(B_l\) is the blocking factor, \(\alpha_i\) is one factor with at least two levels, \(\beta_j\) is another factor with at least two levels, \(\alpha\beta_{ij}\) is the interaction of the two factors, and \(\epsilon_{ijlk} \sim N(0,\sigma^2)\) is the error term.
Only one block and one factor are required. Multiple blocks and multiple factors are allowed. It is not required to include interaction terms. The error term is always required.
Console Help Command: ?aov()
Perform the test

myaov <- aov(Y ~ Block+X1+X2+X1:X2, data=YourDataSet)

View the ANOVA results

summary(myaov)

Check the ANOVA assumptions

plot(myaov, which=1:2)
myaov is some name you come up with to store the results of the aov() test.
Y must be a “numeric” vector of the quantitative response variable.
Block is a qualitative variable that is not of direct interest, but is included in the model to account for variability in the data. It should have class(Block) equal to either factor or character. Use as.factor() if it does not.
X1 is a qualitative variable (should have class(X1) equal to factor or character). If it does not, use as.factor(X1) inside the aov() command.
X2 is a second qualitative variable that should also be either a factor or a character vector. If it is not, use as.factor(X2).
X3, X4, and so on could also be added to the model with + if desired.
X1:X2 denotes the interaction of the factors X1 and X2. It is not required and should only be included if the interaction term is of interest.
YourDataSet is the name of your data set.
Examples: ChickWeight
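As one illustrative sketch for the ChickWeight example (treating each Chick as the block and the measurement occasion Time as the factor of interest is an assumption on our part; other models are possible):

# Block design sketch: Chick is the block; as.factor(Time) treats the
# repeated measurement occasions as a factor of interest.
cw.aov <- aov(weight ~ Chick + as.factor(Time), data = ChickWeight)
summary(cw.aov)
par(mfrow = c(1,2))
plot(cw.aov, which = 1:2)  # check the ANOVA assumptions as usual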