Type I Errors and Type II Errors

Posted on November 24, 2009

One of the most common questions I get in our Six Sigma courses is how to tell the difference between Type I and Type II errors. I'll offer an explanation and then describe how I personally remember which error is which.

Type I and Type II errors are mistakes made when performing null hypothesis testing. Remember that your null hypothesis is always that there is no relationship among your variables. Based on the data, we decide whether or not to reject the null hypothesis, and that decision is either right or wrong. This leads to four possible outcomes. Two of the four are correct decisions: your data leads you to conclude that there is no relationship among your variables and there really isn't, or your data leads you to conclude that there is a relationship and indeed there really is. The other two outcomes are errors.
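
To make those outcomes concrete, here is a minimal sketch (my own illustration, not part of the original post) of the "null is really true" half of the picture: two groups drawn from the same population, tested many times with a two-sample t-test at alpha = 0.05. The sample sizes, the normal data, and the use of scipy are all assumptions for the demo.

```python
# Minimal sketch (illustrative assumption, not from the original post):
# simulate the case where the null hypothesis is TRUE -- both groups come
# from the same population -- and count how often the test still "finds"
# a relationship. Those rejections are Type I errors.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05
n_trials = 10_000

false_positives = 0  # Type I: concluded "relationship" when there is none
for _ in range(n_trials):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)  # same population: null is true
    _, p = ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"Type I error rate: {false_positives / n_trials:.3f} (expected ~{alpha})")
```

Run it and the rejection rate comes out near 0.05. That is no accident: when the null is true, alpha is exactly the Type I error rate you agreed to tolerate by choosing your significance level.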

A Type I error means that your data leads you to conclude that there is a relationship among variables in your dataset, but in the real world there is no relationship. For example, your data shows that eating ice cream is related to living longer, but in reality it isn't. A Type II error occurs when your data leads you to conclude that there is no relationship among your variables, but in the real world there is. For example, your data says there is no difference in lifespan between people who eat only junk food and those who eat a very healthy diet, but in the real world that difference really exists.
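
And here is the companion sketch (again my own illustration) for the "null is really false" half: the two groups now genuinely differ, so every test that fails to reject is a Type II error. The assumed effect size of 0.3 standard deviations and the sample size of 30 per group are arbitrary choices for the demo.

```python
# Companion sketch (illustrative assumption): the null hypothesis is FALSE --
# the two groups genuinely differ in their means -- so any failure to reject
# is a Type II error (a missed real relationship).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha = 0.05
n_trials = 10_000

misses = 0  # Type II: concluded "no relationship" when one really exists
for _ in range(n_trials):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.3, scale=1.0, size=30)  # means genuinely differ
    _, p = ttest_ind(a, b)
    if p >= alpha:
        misses += 1

beta = misses / n_trials
print(f"Type II error rate (beta): {beta:.3f}")
print(f"Power (1 - beta): {1 - beta:.3f}")
```

Shrinking the effect size or the sample size in this sketch drives the Type II rate up, which is why underpowered studies are the classic source of this error.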

How do I personally keep these two errors straight? What I remember is that researchers almost always want to find relationships among their variables: they want to find potential causes, interesting patterns, and things that are worth publishing. So the number one error that researchers are biased toward making is saying that there is a relationship among their variables when there really isn't one. This is the Type I error.

Posted in: Six Sigma