In Part One we raced through some basics, including the very handy central limit theorem.
This theorem tells us that even if we don’t know the type of distribution of a population, we can say something very specific about the mean of a sample from that population (subject to some caveats).
Even though this theorem is very specific and useful, it is not the easiest idea to grasp conceptually. So it is worth taking the time to think about it before considering the caveats.
What do we know about Samples taken from Populations?
Usually we can’t measure the entire “population”, so we take a sample from it. If we take a sample once and measure its mean (= “the average”), then repeat this again and again, and then plot the “distribution” of those sample means, we get the graph on the right:
- and the graph on the right follows a normal distribution.
We know the probabilities associated with normal distributions, so this means that even if we have just ONE sampling distribution – the usual case – we can assess how likely it is that it comes from a specific population.
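The article’s Matlab experiment can be sketched in a few lines of Python. The population below is a uniform distribution centred on 10; its width (2.8) is an assumption, since the article doesn’t state the Matlab population’s parameters:

```python
import random
import statistics

random.seed(42)

# Assumed stand-in for the article's Matlab population: uniform, centred on 10
def draw_sample(n, pop_mean=10.0, width=2.8):
    half = width / 2
    return [random.uniform(pop_mean - half, pop_mean + half) for _ in range(n)]

# Take many samples of 100 items, recording the mean of each one
sample_means = [statistics.mean(draw_sample(100)) for _ in range(10_000)]

# Even though the population is uniform (not normal), the sample means pile up
# in a normal-looking bell curve around the population mean
print(round(statistics.mean(sample_means), 3))   # close to 10
print(round(statistics.stdev(sample_means), 3))  # close to population SD / sqrt(100)
```

Plotting `sample_means` as a histogram gives the bell curve described above, whatever shape the underlying population has.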
Here is a demonstration.
Using Matlab I created a population – the uniform distribution on the left of figure 1. Then I took a random sample from the population. Note that in real life you don’t know the details of the actual population; that is what you are trying to ascertain via statistical methods.
Each sample was 100 items. The test used the known probabilities of the normal distribution to ask: “is this sample from a population of mean = 10?” A statistical test can’t give a definite yes or no, only a % likelihood, so a % threshold was set – as you can see in figure 3, it was set at 95%.
Basically we are asking, “is there a 95% likelihood that this sample was drawn from a population with a mean of 10?”
The exercise of:
a) extracting a random sample of 100 items, and
b) carrying out the test
was repeated 100,000 times.
Even though the sample was drawn from the actual population every single time, 5% of the time (4.95% to be precise) the test rejected the sample as coming from this population. This is to be expected. Statistical tests can only give answers in terms of a probability.
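In Python, the repeated test looks roughly like this – a normal z-test with the population’s standard deviation treated as known. The uniform population’s width of 2.8 is an assumption (the article doesn’t give the Matlab parameters), and the trial count is reduced to keep the run quick:

```python
import random
import statistics

random.seed(1)

POP_MEAN = 10.0
WIDTH = 2.8                  # assumed spread of the uniform population
POP_SD = WIDTH / 12 ** 0.5   # standard deviation of a uniform distribution
N = 100                      # sample size
SE = POP_SD / N ** 0.5       # standard error of the sample mean
Z_95 = 1.96                  # two-sided 95% critical value of the normal distribution

trials = 20_000              # the article used 100,000
rejections = 0
for _ in range(trials):
    sample_mean = statistics.mean(
        random.uniform(POP_MEAN - WIDTH / 2, POP_MEAN + WIDTH / 2)
        for _ in range(N))
    # Reject H0 ("the sample comes from a population of mean 10") when the
    # sample mean lies more than 1.96 standard errors from 10
    if abs(sample_mean - POP_MEAN) / SE > Z_95:
        rejections += 1

print(round(rejections / trials, 3))  # roughly 0.05: ~5% false rejections
```

Every sample genuinely comes from the population, yet about 5% are rejected – exactly the behaviour described above.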
All we have done is confirm that the test at a 95% threshold gives us 95% correct answers and 5% incorrect answers. We do get incorrect answers. So why not increase the level of confidence in the test by increasing the threshold?
Ok, let’s try it. Let’s increase the threshold to 99%:
Nice. Now we only get just under 1% false rejections. We have improved our ability to tell whether or not a sample is drawn from a specific population!
Or have we?
Unfortunately there is no free lunch, especially in statistics.
Reducing the Risk of One Kind of Error Increases the Risk of the Other
In each and every case here we happen to know that we have drawn the sample from the population. But suppose we don’t know this – the usual situation. The wider we cast the net, the more likely we are to assume that a sample is drawn from a population when in fact it is not.
I’ll show some examples shortly, but here is a good summary of the problem – along with the terminology of Type I and Type II errors – note that H0 is the hypothesis that the sample was drawn from the population in question:
What we have been doing by moving from 95% to 99% certainty is reducing the possibility of making a Type I error = thinking that the sample does not come from the population in question when it actually does. But in doing so we have been increasing the possibility of making a Type II error = thinking that the sample does come from the population when it does not.
So now let’s widen the Matlab example – we have added an alternative population and are drawing samples out of that as well.
So first – as before – we take samples from the main population and use the statistical test to find out how good it is at determining whether the samples do come from this population. Then second, we take samples from the alternative population and use the same test to see whether it makes the mistake of thinking the samples come from the original population.
As before, the % of false rejections is about what we would expect (note the number of tests was reduced to 10,000, for no particular reason) for a 95% significance test.
But now we also see the % of “false acceptances” – cases where a sample from the alternative population is tested and wrongly judged to have come from the original population. This error is – in this case – around 4%.
Now we increase the significance level to 99%:
Of course, the number of false rejections (type I error) has dropped to 1%. Excellent.
But the number of false accepts (type II error) has increased from 4% to 13%. Bad news.
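Here is a Python sketch of the false-acceptance experiment. The article doesn’t state the Matlab populations’ parameters, so the uniform width of 2.8 below is an assumption, chosen so the error rates come out close to the figures quoted (about 4% at 95% and 13% at 99%):

```python
import random
import statistics

random.seed(2)

N = 100
WIDTH = 2.8                           # assumed population spread (not stated in the article)
SE = (WIDTH / 12 ** 0.5) / N ** 0.5   # standard error of the sample mean

def sample_from(pop_mean):
    half = WIDTH / 2
    return [random.uniform(pop_mean - half, pop_mean + half) for _ in range(N)]

def accepts_h0(sample, z_crit):
    # True if the test fails to reject H0: "population mean = 10"
    return abs(statistics.mean(sample) - 10.0) / SE <= z_crit

trials = 10_000
results = {}
for label, z_crit in [("95% test", 1.96), ("99% test", 2.576)]:
    # Every sample really comes from the ALTERNATIVE population (mean 10.3),
    # so each acceptance of H0 is a Type II error ("false acceptance")
    results[label] = sum(accepts_h0(sample_from(10.3), z_crit)
                         for _ in range(trials)) / trials
    print(label, round(results[label], 3))
```

Widening the acceptance region from 1.96 to 2.576 standard errors is precisely what lets more samples from the wrong population slip through.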
Now let’s demonstrate why we can’t know in advance how likely Type II errors are. In the following example, the mean of the alternative population has moved to 10.5 (from 10.3):
So no Type II errors. And we widen the test to 99%:
Still no Type II errors. So we widen the test further to 99.9%:
Finally we get some Type II errors. But because the population we are drawing the samples from is different enough from the population we are testing for (our hypothesis), the statistical test is very effective. The “power of the test” – in this case – is very high.
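A Python sketch of this case, again assuming a uniform population of width 2.8 (a guess at the article’s unstated Matlab parameters) with the alternative mean at 10.5:

```python
import random
import statistics

random.seed(5)

N = 100
WIDTH = 2.8                           # assumed population spread (not stated in the article)
SE = (WIDTH / 12 ** 0.5) / N ** 0.5   # standard error of the sample mean

def sample_mean(pop_mean):
    half = WIDTH / 2
    return statistics.mean(random.uniform(pop_mean - half, pop_mean + half)
                           for _ in range(N))

trials = 10_000
results = {}
for label, z_crit in [("95%", 1.96), ("99%", 2.576), ("99.9%", 3.291)]:
    # Samples come from the alternative population (mean 10.5); each time the
    # test accepts H0 ("mean = 10") it has made a Type II error
    type2 = sum(abs(sample_mean(10.5) - 10.0) / SE <= z_crit
                for _ in range(trials)) / trials
    results[label] = type2
    print(label, type2)
```

With the alternative mean a full 0.5 away, even the 99.9% test almost never accepts the wrong population – the test has high power against this alternative.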
So, in summary, when you see a test “at the 5% significance level” (= 95% confidence), or “at the 1% significance level” (= 99% confidence), you have to understand that the more impressive the significance level, the more likely it is that a false result has been accepted.
Increasing the Sample Size
As the sample size increases, the distribution of “the mean of the sample” gets narrower. I know, stats sounds like gobbledygook.
Let’s see a simple example to demonstrate this simple idea that statistics turns into incomprehensible English:
As you increase the size of the sample, you reduce the spread of the “sampling means” and this means that separating truth from fiction becomes easier.
It isn’t always possible to increase the sample size (for example, the record of monthly satellite temperatures is only as long as the satellite era), but when it is possible, it makes it easier to determine whether or not a sample is drawn from a given distribution.
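A quick Python illustration of the shrinking spread, using an assumed uniform population centred on 10 (width 2.8, as the article doesn’t give the Matlab parameters):

```python
import random
import statistics

random.seed(3)

def spread_of_sample_means(n, trials=2000, pop_mean=10.0, width=2.8):
    # Draw `trials` samples of size n from a uniform population and return
    # the standard deviation of the resulting sample means
    half = width / 2
    means = [statistics.mean(random.uniform(pop_mean - half, pop_mean + half)
                             for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Quadrupling the sample size roughly halves the spread: it scales as 1/sqrt(n)
print(round(spread_of_sample_means(25), 3))
print(round(spread_of_sample_means(100), 3))
print(round(spread_of_sample_means(400), 3))
```

The narrower the distribution of sample means, the easier it is to tell two candidate populations apart – which is why larger samples give more powerful tests.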
Student’s T-test vs Normal Distribution Test
What is a Student’s t-test? It sounds like something “entry level” that serious people don’t bother with.
Actually it is a test developed by William Gosset just over 100 years ago, and he had to publish under the pen name “Student” because of his employer – statistics was one of his employer’s trade secrets.
In the tests shown earlier we had to know the standard deviation of the population from which the sample was drawn. Often we don’t know this, and so we have a sample of unknown standard deviation – and we want to test the probability that it is drawn from a population of a certain mean.
The principle is the same, but the process is slightly different.
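The difference can be sketched in Python. When the standard deviation is estimated from a small sample, comparing the test statistic against the normal distribution’s critical value rejects true hypotheses too often; Student’s t critical value (2.093 for 19 degrees of freedom, versus 1.96) restores the intended 5% rate. The normal population below is an illustrative assumption:

```python
import random
import statistics

random.seed(4)

N = 20            # a small sample; the population's standard deviation is unknown
Z_95 = 1.96       # two-sided 95% critical value of the normal distribution
T_95 = 2.093      # Student's t critical value for n - 1 = 19 degrees of freedom

def t_statistic():
    # Sample genuinely drawn from a population with mean 10; the standard
    # deviation in the denominator must be estimated from the sample itself
    sample = [random.gauss(10, 2) for _ in range(N)]
    return (statistics.mean(sample) - 10.0) / (statistics.stdev(sample) / N ** 0.5)

trials = 20_000
stats_list = [abs(t_statistic()) for _ in range(trials)]

z_rate = sum(s > Z_95 for s in stats_list) / trials  # normal cutoff: too many rejections
t_rate = sum(s > T_95 for s in stats_list) / trials  # t cutoff: the intended ~5%
print(round(z_rate, 3), round(t_rate, 3))
```

The wider t critical value compensates for the extra uncertainty introduced by estimating the standard deviation from only 20 items.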
More in the next article, where hopefully we get to the concept of autocorrelation.
In all the basic elements we have covered so far we have assumed that each element in a sample and in a population is unrelated to any other element – independent events. Unfortunately, in the atmosphere and in climate, this assumption is not true (perhaps there are some circumstances where it is true, but generally it is not true).