As we have discussed, the textbook is a living document. Brother Saunders developed the first edition, and most of the time you will get his version, but at times I will make updates to specific pages.

Zip Downloads

Alternative Format Zip Downloads

I am building the textbook material using a slightly different format as we move through the semester. You can just ignore this section if you like the format above.

Week 3 Folder

HypothesisTesting.Rmd

I edited the last portion of the HypothesisTesting.Rmd file that you get in your Week 3 folder. Here is a link to the full page and the .Rmd. The text below is the new portion that replaces the last few sections of the original.


#### Managing Decision Errors

When the $p$-value approaches zero, one of two things must be occurring: either an extremely rare event has happened, or the null hypothesis is incorrect. Since the second option, that the null hypothesis is incorrect, is the more plausible one, we reject the null hypothesis in favor of the alternative whenever the $p$-value is close to zero. It is important to remember, however, that rejecting the null hypothesis could be a mistake.

<div style="padding-left:30px; padding-right:10%;">

| Decision | $H_0$ True | $H_0$ False |
|--------|------------|-------------|
| **Reject** $H_0$ | Type I Error | Correct Decision |
| **Accept** $H_0$ | Correct Decision | Type II Error |

</div>
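
To see this decision rule in action, here is a small R sketch (the data are simulated purely for illustration): a one-sample $t$ test where the null hypothesis $\mu = 0$ really is true, followed by the reject / fail-to-reject decision at $\alpha = 0.05$.

```{r}
# Simulate a sample where the null hypothesis (mu = 0) really is true
set.seed(42)
x <- rnorm(30, mean = 0, sd = 1)

# One-sample t test of H0: mu = 0
result <- t.test(x, mu = 0)
result$p.value

# Decision rule: reject H0 when the p-value falls below alpha
alpha <- 0.05
if (result$p.value < alpha) "Reject the null hypothesis" else "Fail to reject the null hypothesis"
```

Since this sample was generated with $\mu = 0$, a rejection here would be a Type I Error, the top-left cell of the table above.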

<br />

#### Type I Error, Significance Level, Confidence and $\alpha$

A **Type I Error** is defined as rejecting the null hypothesis when it is actually true. (Throwing away truth.) The **significance level**, $\alpha$, of a hypothesis test controls the probability of a Type I Error. The typical value of $\alpha = 0.05$ came from tradition and is a somewhat arbitrary value. Any value from 0 to 1 could be used for $\alpha$. When deciding on the level of $\alpha$ for a particular study, it is important to remember that as $\alpha$ increases, the probability of a Type I Error increases and the probability of a Type II Error decreases. When $\alpha$ gets smaller, the probability of a Type I Error gets smaller, while the probability of a Type II Error increases. **Confidence** is defined as $1-\alpha$, the complement of the Type I Error rate. It is the probability of retaining (not rejecting) the null hypothesis when it is in fact true.
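
One way to convince yourself that $\alpha$ really does control the Type I Error rate is a quick simulation (a sketch with made-up settings): generate many samples for which the null hypothesis is true and count how often the test mistakenly rejects.

```{r}
# When H0 is true, the long-run rejection rate should be close to alpha
set.seed(101)
alpha <- 0.05
pvals <- replicate(10000, t.test(rnorm(25, mean = 0), mu = 0)$p.value)
mean(pvals < alpha)  # proportion of (mistaken) rejections, roughly 0.05
```

The complementary proportion, roughly $1-\alpha = 0.95$, is the confidence just described.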

<br />


#### Type II Errors, $\beta$, and Power

It is also possible to make a **Type II Error**, which is defined as failing to reject the null hypothesis when it is actually false. (Failing to move to truth.)  The probability of a Type II Error, $\beta$, is often unknown. However, practitioners often make an assumption about a desired detectable difference, which then allows $\beta$ to be prescribed much like $\alpha$.  In essence, the detectable difference prescribes a fixed value for $H_a$. We can then talk about the **power** of a hypothesis test, which is $1-\beta$, one minus the probability of a Type II Error. See [Statistical Power](https://en.wikipedia.org/wiki/Statistical_power) on Wikipedia for a starting source if you are interested.  [This website](http://rpsychologist.com/d3/NHST/){target="_blank"} provides a novel interactive visualization to help you understand power. It does require a little background on [Cohen's D](http://rpsychologist.com/d3/cohend/). 
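
For a concrete power calculation, base R's `power.t.test()` reports the power, $1-\beta$, once you supply an assumed detectable difference, sample size, standard deviation, and significance level (the values below are purely illustrative).

```{r}
# Power of a one-sample t test to detect a difference of 0.5 standard
# deviations with n = 30 and alpha = 0.05; "power" in the output is 1 - beta
power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05,
             type = "one.sample")
```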

<br />


#### Developing Statistical Tests

When we decide on our null and alternative hypotheses, we are also responsible for defining the distributional model that we will use to make comparisons.  Traditional introductory statistics classes make no attempt to provide statistical distributions that work outside of the normality assumption.  In this class we will provide non-parametric methods that do not require the assumption of normality (or any other parametric model).  These non-parametric tests are very useful, but they often require a sacrifice of statistical **power**.  In other words, non-parametric methods often will not be as "powerful" (or "good") at detecting differences between groups.
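
As a quick sketch of this trade-off (using made-up, skewed data), here is the same two-group comparison run with the parametric `t.test()` and with its non-parametric counterpart, `wilcox.test()`, which drops the normality assumption.

```{r}
# Simulated skewed (non-normal) data for two groups
set.seed(7)
groupA <- rexp(20, rate = 1)
groupB <- rexp(20, rate = 1 / 1.5)

t.test(groupA, groupB)$p.value       # parametric two-sample t test
wilcox.test(groupA, groupB)$p.value  # non-parametric Wilcoxon rank-sum test
```

With skewed data like this, the Wilcoxon test is the safer choice, even though it may need a somewhat larger sample to achieve the same power.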

<br />