A collection of items having something in common that is of interest to a researcher.
A measurement of the entire population that you would like to know (e.g., the percent of the voting population who will vote for a political candidate; the population's yearly income per person; etc.).
A subset of a population.
A measurement of the sample that parallels the population parameter (usually calculated so that it is just like the population parameter of interest, except calculated for the sample). Typically, the population parameter you are after tells you what statistic to calculate using data gathered from your sample.
Common-sense, Sort-of-Random Sample
A sample selected in such a way that the actual items selected are not predetermined. This is not the way statisticians think of "randomly selected." See "Random Sample", below.
Items selected from a population in a way that gives every member of the population an equal chance of being selected.
What does "equal chance of being selected" mean? If we were to repeat the selection procedure a huge number of times (huge relative to the size of the population), then each item in the population should be selected approximately the same fraction of the time.
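To make "equal chance of being selected" concrete, here is a minimal sketch in Python. The 10-item population and the number of draws are arbitrary choices for illustration; the point is that, repeated many times, each item is selected roughly the same fraction of the time.

```python
import random
from collections import Counter

population = list(range(10))  # a small, arbitrary 10-item population
draws = 100_000               # repeat the selection procedure many times

# random.choice gives every item the same chance on each draw
counts = Counter(random.choice(population) for _ in range(draws))

# each item should be selected roughly 1/10 of the time
fractions = {item: counts[item] / draws for item in population}
```

With 100,000 draws, every one of the ten fractions lands close to 0.10, which is what "equal chance" means in the long-run sense described above.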
Draw a sample from a population to gain information about it
Samples drawn truly at random (meaning that every member of the population is equally likely to be chosen) tend to reflect the characteristics of the population from which they are drawn. However, see "Perform simulations to gain information about the process of sampling," below.
Perform simulations to gain information about the process of sampling
The characteristics of samples drawn at random will not always accurately reflect the population's characteristics, because characteristics of samples vary from sample to sample. This is called sampling variability. At times a sample may even reflect the population's characteristics very poorly. By simulating the process of drawing samples at random from various populations with known characteristics, we can gain insight into the process's likelihood of producing estimates that are within a certain number of percentage points of the population's actual percent.
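As a sketch of such a simulation (the 55% split, the sample size, and the number of samples are assumptions chosen for illustration), we can draw many random samples from a population with a known percent and watch the sample percents vary:

```python
import random

population_percent = 55   # known split, assumed for this illustration
sample_size = 400
num_samples = 2000

sample_percents = []
for _ in range(num_samples):
    # each selected person is a "yes" with probability 0.55
    yes = sum(random.random() < population_percent / 100 for _ in range(sample_size))
    sample_percents.append(100 * yes / sample_size)

# sampling variability: the sample percents scatter around 55%
low, high = min(sample_percents), max(sample_percents)
```

No single sample percent is guaranteed to equal 55%; some samples land well above it and some well below, which is sampling variability in action.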
Measuring the variability of a collection of sample percents
We can measure how variable the percents calculated from random samples of a given size are. One common measure is the fraction of a collection's sample percents that are within certain ranges of the population percent.
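This measure can be computed directly from simulated sample percents. A sketch, assuming a 50% population split and samples of 800 (both choices are for illustration only):

```python
import random

population_percent = 50   # assumed known split for the simulation
sample_size = 800
num_samples = 2500

def one_sample_percent():
    hits = sum(random.random() < population_percent / 100 for _ in range(sample_size))
    return 100 * hits / sample_size

percents = [one_sample_percent() for _ in range(num_samples)]

# the measure: fraction of sample percents within 2 points of the population percent
within_2 = sum(abs(p - population_percent) <= 2 for p in percents) / num_samples
```

The resulting fraction varies a little from run to run, which is itself a reminder that these are long-run summaries of a variable process.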
In statistics an event is said to be "unusual" (or "unlikely", or "rare") if over the long run we expect to see it a small fraction of the time. This way of thinking about unusualness does not say anything about the event per se. Rather, it emphasizes our expectation that, for whatever reason and relative to certain circumstances, we will see it relatively infrequently. By convention, statisticians have agreed that "a small fraction of the time" means 5% of the time or less. Thus, an unusual event is one that we expect to occur in 5% or fewer of the large number of times that it can occur.
A pattern that emerges only over the long run. It is impossible to predict the value of the next element in the sequence; only from the long-term behavior can a pattern be discerned.
Example: The distribution of sample percents calculated from samples of a given size drawn randomly from a population. Because of sampling variability we cannot accurately predict what the outcome of any one sample will be. Only in the long run, after the random drawing of a large number of samples, does the distribution emerge.
Suppose a contractor is asked about the accuracy of one specific measurement taken by one specific carpenter. He or she DOES NOT KNOW how accurate that measurement is. The best he or she can say is something like, "When we've studied this issue in the past, 99% of all carpenters' measurements were within 5% of the item's actual measure as determined by a much more accurate instrument, so I expect this one measurement to be pretty accurate."
Drawing one sample is like taking one measurement. The person paying for the sample is like the carpenter: he or she is interested in the accuracy of THAT ONE SAMPLE. But you, the statistician, are like the contractor. You DON'T KNOW how accurate this specific sample is. You can only justify your conclusions by appealing to what happens over the long run.
Suppose a customer, who has paid a lot of money for you to conduct a survey of 1600 people, asks, "How do you know that these results are accurate?"
The best you can say is something like, "We took great care to ensure that we used a truly random selection process in choosing these 1600 people. In our simulations of sampling 1600 people at random from a large population, 99% of the samples were within 2 percentage points of the population's actual percent. And this was true regardless of the actual population percent. So, it is unlikely that this sample's percent differs from the actual population percent by more than 2 percentage points." "Unlikely" means nothing about THIS SAMPLE, however. It just means that 1% of all samples will be farther than 2 percentage points from the actual measure.
What has "error" in it? The PROCESS of drawing a sample. It is not that any sample statistic is wrong. Rather, the idea of "error" is that a statistic computed from a sample will deviate from the actual population parameter.
In inferential statistics, we are fundamentally concerned with what happens over the long run if we were to repeat a process a large number of times. The reason for this is that insight into what happens over the long run is how we judge the trustworthiness of any individual result.
Margin of Error
This is a technical term meant to convey how variable sample statistics calculated from randomly drawn samples of a particular size are.
Here is a table that shows results of simulations of drawing samples of various sizes.
| Sample Size | Number of Samples Drawn | Fraction of Sample Percents within 1 percentage point of Population Percent | Within 2 percentage points | Within 3 percentage points | Within 4 percentage points |
| --- | --- | --- | --- | --- | --- |
| 100 | 2500 | 405/2500 = 0.16 | 805/2500 = 0.32 | 1172/2500 = 0.47 | 1483/2500 = 0.59 |
| 200 | 2500 | 612/2500 = 0.24 | 1147/2500 = 0.46 | 1590/2500 = 0.64 | 1914/2500 = 0.77 |
| 400 | 2500 | 840/2500 = 0.34 | 1544/2500 = 0.62 | 2002/2500 = 0.80 | 2284/2500 = 0.91 |
| 800 | 2500 | 1142/2500 = 0.46 | 1981/2500 = 0.79 | 2331/2500 = 0.93 | 2448/2500 = 0.98 |
| 1600 | 2500 | 1523/2500 = 0.61 | 2296/2500 = 0.92 | 2463/2500 = 0.99 | 2500/2500 = 1.00 |
| 3200 | 2500 | 1957/2500 = 0.78 | 2453/2500 = 0.98 | 2500/2500 = 1.00 | 2500/2500 = 1.00 |
The idea of margin of error comes from asking, "Within what range of the actual population percent will we find at least x% of the samples when we draw samples of size N at random from the population?" "Margin of error" is not about how accurate any one sample is. Rather, it is about how accurate samples tend to be over the long run.
The idea that margin of error is about how accurate samples tend to be over the long run can be turned into a procedure. Suppose we surveyed 400 people on a subject. We want to determine a "plus or minus" range for samples of size 400 so that we expect at least 60% of all such samples to fall within that range of the actual population percent. The table above suggests that around 62% of all 400-item samples will be within 2 percentage points of the actual population percent.
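The procedure can be sketched as a search over whole-point margins: simulate many samples of size 400 and take the smallest ±d whose long-run coverage is at least 60%. The 45% population split is an assumption for illustration, and since the true coverage at ±2 points sits right around 60%, the margin found will come out as 2 or 3 points depending on the run.

```python
import random

population_percent = 45   # assumed split, for illustration only
sample_size = 400
num_samples = 2500

percents = []
for _ in range(num_samples):
    hits = sum(random.random() < population_percent / 100 for _ in range(sample_size))
    percents.append(100 * hits / sample_size)

def coverage(d):
    """Fraction of simulated sample percents within +/- d points of the population percent."""
    return sum(abs(p - population_percent) <= d for p in percents) / num_samples

# smallest whole-point margin that covers at least 60% of samples over the long run
margin = next(d for d in range(1, 11) if coverage(d) >= 0.60)
```

Note that `margin` is a property of the sampling process at this sample size, not of any single survey result.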
So, for a sample of 400 that came up with a sample percent of, say, 45%, we would say
"45% of this sample of 400 people (said such and such). These results have a margin of error of ±2% with a confidence level of 62%."
Please note that "±2%" IS NOT ABOUT THIS PARTICULAR SAMPLE!!! Rather, it specifies the long-term accuracy that we have in mind: the fraction of samples we expect to be within two percentage points of the population percent. "62%" is the fraction of all samples of 400 people that we expect, over the long run, to be within ±2% of the population parameter.
It would be a big mistake to think that this sample of 400 people is within ±2% of the population percent. We cannot make that claim. We have no idea how accurate this sample is.
We are like the contractor who cannot judge the accuracy of a particular measurement by a particular carpenter who is at a site many miles away. All he can vouch for is that his carpenters' measurements are within a certain range of the actual measurements a high percent of the time.
Simulation as grounds for justification
We justify our claims about samples' accuracy by assuming that the future will resemble the past. In our simulations we drew many 500-item random samples from populations having various splits. It turned out that 95% of all 500-item samples in every simulation fell within ±4 percentage points of the actual population percent. The population's actual percent did not matter. So, we expect that in the future, 95% of the time we draw 500 items at random and calculate a percent, it will be within ±4 percentage points of the actual population percent.
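A sketch of that check, assuming three arbitrary population splits. The exact fractions vary a little from run to run, but they come out similar for every split, which is the point:

```python
import random

sample_size = 500
num_samples = 2000

def fraction_within(population_percent, d):
    """Fraction of simulated sample percents within +/- d points of the population percent."""
    count = 0
    for _ in range(num_samples):
        hits = sum(random.random() < population_percent / 100 for _ in range(sample_size))
        if abs(100 * hits / sample_size - population_percent) <= d:
            count += 1
    return count / num_samples

# coverage is about the same regardless of the population's actual split
results = {p: fraction_within(p, 4) for p in (30, 50, 70)}
```

All three fractions land in the same neighborhood (a bit above 90% in a typical run), so the coverage claim does not depend on knowing the population's actual percent.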
In our simulations, ninety-five percent of all 800-item samples were within ±3 percentage points of the actual population percent. So, we expect that, in the future, 95% of the time that we draw 800 items at random and calculate a percent, it will be within ±3 percentage points of the actual population percent.