Measures of Variation in Statistics

These are my thoughts on measures of variation in statistics.

 


Range

To understand variation, we begin with the range. The range of a set of data values is the difference between the maximum data value and the minimum data value. Because it uses only the maximum and minimum values, the range is very sensitive to extreme values; it is not resistant. For the same reason, it does not take every data value into account and therefore does not truly reflect the variation among all of the data values.

\[ \text{range} = \text{maximum value} - \text{minimum value} \]
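As a minimal sketch, here is the range computed directly in Python; the data values are made up for illustration:

    # Hypothetical sample values, for illustration only.
    data = [12, 7, 3, 15, 9]

    # range = maximum value - minimum value
    data_range = max(data) - min(data)
    print(data_range)  # 15 - 3 = 12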

 

Range Rule of Thumb

The range rule of thumb is a quick way to ballpark the standard deviation.

\[ s \approx 0.25 \times \text{range} \]
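As a quick worked example with hypothetical numbers: if a data set runs from 50 to 70, the range is 20, so the rule estimates \(s \approx 0.25 \times 20 = 5\).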

 

Standard Deviation of a Sample

The standard deviation is the measure of variation most commonly used in statistics. It measures how much data values deviate from the mean. The standard deviation found from sample data is a statistic denoted by \(s\).

 

The symbol for sample standard deviation is \(s\).

The symbol for population standard deviation is \(\sigma\).

The symbol for sample variance is \(s^2\).

The symbol for population variance is \(\sigma^{2}\).

 

The value of the standard deviation is never negative; it is zero only when all of the data values are exactly the same. Larger values indicate greater variation, and the standard deviation can increase dramatically with one or more outliers, so it is not resistant. The units of the standard deviation are the same as the units of the original data values.

 

Here are the steps for finding the sample standard deviation (a worked sketch follows the list):

  1. Find the mean of your data values.
  2. Subtract the mean from each individual sample value.
  3. Square each of the deviations obtained in the previous step.
  4. Add all of the squares obtained in the previous step.
  5. Divide the total from the previous step by \(n - 1\), which is 1 less than the total number of data values present.
  6. Find the square root of the result of the previous step.
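Putting the steps together gives the usual formula for the sample standard deviation, where \(\bar{x}\) is the sample mean and \(n\) is the number of data values:

\[ s = \sqrt{\frac{\sum (x - \bar{x})^2}{n - 1}} \]

The sketch below follows the six steps literally on a small made-up sample and checks the result against Python's statistics.stdev, which uses the same \(n - 1\) divisor:

    import math
    import statistics

    # Hypothetical sample, for illustration only.
    data = [4, 8, 6, 5, 3]

    n = len(data)
    mean = sum(data) / n                           # step 1: find the mean
    deviations = [x - mean for x in data]          # step 2: subtract the mean
    squares = [d ** 2 for d in deviations]         # step 3: square each deviation
    total = sum(squares)                           # step 4: add the squares
    variance = total / (n - 1)                     # step 5: divide by n - 1
    s = math.sqrt(variance)                        # step 6: take the square root

    print(s)                       # about 1.924
    print(statistics.stdev(data))  # should match the value above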

 

Standard Deviation of a Population

A different formula is used to find the standard deviation of a population: we divide by the population size \(N\) instead of \(n - 1\). When using a calculator, make sure you know which kind of standard deviation it is giving you. The variance of a set of values is a measure of variation equal to the square of the standard deviation.
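For a population of size \(N\) with mean \(\mu\), the formula is:

\[ \sigma = \sqrt{\frac{\sum (x - \mu)^2}{N}} \]

compared with \(s = \sqrt{\sum (x - \bar{x})^2 / (n - 1)}\) for a sample. In Python's standard library, for instance, statistics.stdev uses the \(n - 1\) divisor while statistics.pstdev divides by \(N\); calculators typically make the same distinction, often labeling the two results something like \(s_x\) and \(\sigma_x\).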

 

The units of the variance are the squares of the units of the original data values. The value of the variance can increase dramatically with the inclusion of outliers. So, the variance is not resistant. The value of the variance is never negative. It is zero only when all of the data values are the same number. 
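In symbols, the sample and population variances are just the squares of the corresponding standard deviations:

\[ s^2 = \frac{\sum (x - \bar{x})^2}{n - 1} \qquad \sigma^2 = \frac{\sum (x - \mu)^2}{N} \]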

 

In measuring variation in a set of sample data, it makes sense to begin with the individual amounts by which values deviate from the mean, and then to combine those deviations into one number that can serve as a measure of variation. We cannot simply add the deviations, because their sum is always zero. Instead, we can use the absolute values of the deviations. When we find the mean of those absolute deviations, we get the mean absolute deviation, which is the mean distance of the data values from the mean.
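Written out, the mean absolute deviation of a sample of \(n\) values is:

\[ \text{mean absolute deviation} = \frac{\sum |x - \bar{x}|}{n} \]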

 

Computation of the mean absolute deviation uses absolute values, an operation that is not algebraic. Using absolute values would be simple, but it creates difficulties in the methods of inferential statistics. The standard deviation has the advantage of using only algebraic operations: because it is based on the square root of a sum of squares, it closely parallels distance formulas found in algebra, and many statistical procedures are based on a similar sum of squares. Consequently, instead of using absolute values, we square all deviations so that they are nonnegative, and those squares are used to calculate the standard deviation.

 

After finding all of the individual squared deviations, we combine them by finding their sum. We then divide by \(n - 1\) because only \(n - 1\) of the values can be assigned without constraint: with a given mean, we can use any numbers for the first \(n - 1\) values, but the last value is then automatically determined. With division by \(n - 1\), sample variances tend to center around the value of the population variance; with division by \(n\), sample variances tend to underestimate it.
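A quick simulation makes the last point concrete. This sketch assumes nothing beyond Python's standard library, and the population values are made up for illustration: it repeatedly draws small samples from a known population and averages the two versions of the sample variance.

    import random
    import statistics

    # Hypothetical population, for illustration only.
    population = list(range(1, 101))
    pop_var = statistics.pvariance(population)  # population variance (divide by N)

    n, trials = 5, 100_000
    sum_var_n1 = 0.0  # running total of variances computed with n - 1
    sum_var_n = 0.0   # running total of variances computed with n

    for _ in range(trials):
        sample = [random.choice(population) for _ in range(n)]
        m = sum(sample) / n
        ss = sum((x - m) ** 2 for x in sample)
        sum_var_n1 += ss / (n - 1)
        sum_var_n += ss / n

    print(pop_var)              # the population variance
    print(sum_var_n1 / trials)  # centers near the population variance
    print(sum_var_n / trials)   # tends to underestimate it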

 

A concept helpful in interpreting the value of the standard deviation is the empirical rule. This rule states that for data sets having a distribution that is approximately bell-shaped, the following properties apply (a numerical check follows the list):

  1. About 68 percent of all values fall within 1 standard deviation of the mean.
  2. About 95 percent of all values fall within 2 standard deviations of the mean.
  3. About 99.7 percent of all values fall within 3 standard deviations of the mean.
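As a rough numerical check, assuming a bell-shaped (normal) population, the sketch below draws values with Python's random.gauss and counts how many land within 1, 2, and 3 standard deviations of the mean; the mean and standard deviation used are arbitrary:

    import random

    # Hypothetical bell-shaped data: normal with mean 50 and standard deviation 10.
    mu, sigma, trials = 50, 10, 100_000
    values = [random.gauss(mu, sigma) for _ in range(trials)]

    for k in (1, 2, 3):
        within = sum(1 for x in values if abs(x - mu) <= k * sigma)
        print(k, within / trials)  # roughly 0.68, 0.95, and 0.997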

 

Another concept helpful in understanding the value of a standard deviation is Chebyshev's theorem. The empirical rule applies only to data sets with bell-shaped distributions, but Chebyshev's theorem applies to any data set. Unfortunately, its results are only crude lower limits on the proportions involved, so the theorem has limited practical usefulness.
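For reference, the theorem states that for any \(K > 1\), the proportion of values within \(K\) standard deviations of the mean is at least

\[ 1 - \frac{1}{K^{2}} \]

so, for example, at least \(3/4\) (75%) of the values lie within 2 standard deviations of the mean, and at least \(8/9\) (about 89%) lie within 3 standard deviations.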

 

If the population mean is \(\mu\) and the population standard deviation is \(\sigma\), then the range rule of thumb for identifying significant values is as follows:

Significantly low values are \(\mu - 2\sigma\) or lower.

Significantly high values are \(\mu + 2\sigma\) or higher.

Values that are not significant lie between those two values.
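As a worked example with hypothetical numbers, suppose \(\mu = 100\) and \(\sigma = 15\). Then values of \(100 - 2(15) = 70\) or lower are significantly low, values of \(100 + 2(15) = 130\) or higher are significantly high, and values between 70 and 130 are not significant.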