Statistical significance is a key concept in data analysis that measures how likely a result is due to chance. A result is statistically significant if it is unlikely to have occurred by chance.
This matters because a statistically significant result is more likely to reflect a real effect, so you can use it to make informed decisions with greater confidence. Many fields use statistical significance, but it is most common in research.
Before we learn more, let’s get into some definitions to ensure you understand statistical significance fully.
When you conduct research, no result is ever 100% certain. Even the best-designed study carries some risk of reaching the wrong conclusion purely by chance.

Before running a test, you decide how much of that risk you're willing to accept. That threshold is called the alpha level, or significance level. An alpha of 0.05, for example, means you accept a 5% chance of declaring a difference when none really exists.

You may question how much alpha risk you're willing to accept. Is 95% confidence enough, or do you need 99% or 99.9%? The probability of wrongly rejecting a true null hypothesis at your chosen threshold is what we call the alpha risk.
Imagine you wanted to know whether your website would generate more sales if you changed the yellow color of your “flash sale” advertisement. Here, the null hypothesis is that changing the original yellow color makes no difference to sales.

The claim that a red advertisement performs differently is an alternative hypothesis. You could test further alternatives, comparing green, orange, and purple advertisements as well. The purpose of the test is to control your risk of wrongly rejecting the null hypothesis.
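To make this concrete, here is a minimal Python sketch of one way to test such a color change, a two-proportion z-test, with entirely hypothetical conversion counts:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided p-value for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

# Hypothetical data: 120 sales out of 2,000 views (yellow) vs. 160 of 2,000 (red)
p = two_proportion_z(120, 2000, 160, 2000)
print(p < 0.05)  # True: reject the null hypothesis of "no difference"
```

A small p-value here would suggest the color change is associated with a real difference in sales, not just random variation between the two samples.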
A baseline is a measurement, calculation, or location that’s your basis for comparison.
If you have a basket of apples and you're interested in comparing them to oranges, the apples are your baseline from which to measure.
Correlation is the measurement of the relationship (strength and direction) between two variables.
Standard deviation measures the amount of variation in a set of values. It’s the square root of the variance.
A small standard deviation means the values in the dataset are close to the mean (average), while a large standard deviation indicates the values are spread over a wider range.
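As a quick illustration (with made-up numbers), Python's built-in `statistics` module computes both quantities directly:

```python
import statistics

# Two made-up datasets with the same mean but different spread
tight = [9, 10, 10, 11]   # values clustered near the mean
wide = [2, 8, 12, 18]     # values spread over a wider range

print(statistics.mean(tight), statistics.mean(wide))  # both means are 10
print(statistics.stdev(tight))  # small standard deviation (~0.82)
print(statistics.stdev(wide))   # large standard deviation (~6.73)
```

Although both datasets average 10, the standard deviation immediately reveals how differently they are spread around that mean.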
Understanding statistical significance is essential to make sense of data. As more and more companies rely on data to take decisive action, it’s vital for managers to grasp statistical significance. It can help companies comprehend the results of their data more effectively.
Let's say you’re looking at the results of a customer satisfaction survey. If the results are statistically significant, they are unlikely to have occurred by chance alone.
This means that the results are more likely to be true, meaning you can be more confident using the information to improve your business.
We can see numerous examples of statistical significance in the world around us.
Medical research uses statistical significance. When scientists are testing a new drug, they often compare the results of the drug to a control group. The control group has a placebo or ineffective treatment. If the effects of the drug are much better than the outcome of the control group, the difference is statistically significant.
This concept applies to various fields, from social science to physics.
You determine statistical significance with a statistical test that compares the data of two variables to a null hypothesis. The results are statistically significant if there is strong evidence against the null hypothesis, as the effect is unlikely to have occurred by chance.
To determine whether a statistical result is significant, you compare the effect size to the variability in the data, typically measured by the standard deviation. The larger the effect is relative to that variability (and the larger the sample), the more likely the result is to be significant; a formal test combines these quantities into a test statistic and a p-value.
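One common way to express an effect size relative to variability is Cohen's d: the difference between group means divided by the pooled standard deviation. A minimal sketch with made-up measurements:

```python
import statistics

def cohens_d(a, b):
    """Standardized mean difference: effect size relative to pooled spread."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

group_a = [5.1, 4.8, 5.3, 5.0, 4.9]  # made-up measurements
group_b = [4.2, 4.0, 4.5, 4.1, 4.3]

d = cohens_d(group_a, group_b)
print(round(d, 2))  # by convention, d around 0.8 or above counts as "large"
```

A d near zero means the group difference is small compared with the natural spread of the data; a large d means the difference dwarfs that spread.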
The p-value is the probability of seeing a difference at least as large as the one you observed between your baseline and the comparison group, assuming the null hypothesis is true, that is, assuming there is no real difference between the two outcomes.

That's what the p-value is for: quantifying how surprising your observed difference would be if chance alone were at work.
A p-value less than 0.05 (5%) is typically considered statistically significant. It’s the conventional threshold for deciding whether to reject the null hypothesis.
You calculate p-values with several factors, including the size of the difference between the groups you’re comparing and the data variability. The smaller the p-value, the stronger the evidence against the null hypothesis, and it’s more likely the difference is real and not just due to chance.
You use the p-value to decide whether or not to reject the null hypothesis. If the p-value is less than 0.05, you reject the null hypothesis. So, a p-value of 0.05 means a 5% chance of getting a result that is at least as extreme as the one you observed if the null hypothesis is true.
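One transparent way to see where a p-value comes from is a permutation test: shuffle the group labels many times and count how often the shuffled difference in means is at least as extreme as the observed one. A sketch with made-up conversion data (1 = sale, 0 = no sale):

```python
import random

def permutation_p_value(a, b, n_iter=10_000, seed=0):
    """Two-sided p-value: the share of label shufflings whose mean
    difference is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        sa, sb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(sa) / len(sa) - sum(sb) / len(sb)) >= observed:
            hits += 1
    return hits / n_iter

# Made-up example: sales per visitor for two ad colors
yellow = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
red = [1, 1, 1, 1, 1, 0, 1, 1, 1, 1]

p = permutation_p_value(yellow, red)
print(p < 0.05)  # True: reject the null hypothesis at the 0.05 level
```

With enough shuffles, this fraction approximates the p-value without making any assumptions about the shape of the data's distribution.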
This question often comes up in statistics, usually from confusing 0.5 with 0.05. Under the conventional 0.05 threshold, a p-value of 0.5 is not statistically significant: it means a result at least as extreme as yours would occur about half the time by chance alone if the null hypothesis were true.

For example, consider a study examining a new drug’s effect on people with a certain disease. A p-value of 0.5 would give you no real evidence that the drug works, while a p-value below 0.05 would let you reject the null hypothesis that the drug has no effect.
A p-value helps companies decide whether there is enough evidence to conclude that a difference between two groups is real rather than due to chance.
P-values are not the only factor to consider when deciding whether a result is statistically significant. You should also consider other factors, such as the effect size and the confidence interval. Still, researchers often use the p-value to decide whether a result is statistically significant.
Many different statistical significance tests exist, each with advantages and disadvantages. Your test choice depends on the data type you’re analyzing, the research question you’re addressing, and the test's assumptions.
Here are the types of statistical significance tests:
A t-test is for when the population standard deviation is unknown, such as if you wanted to determine whether two different diets lead to different mean weight loss amounts.
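As an illustration of the two-diet example, you can compute a pooled two-sample t statistic by hand and compare it with a textbook critical value (the data here are made up):

```python
import math
import statistics

def two_sample_t(a, b):
    """Pooled two-sample t statistic (population std unknown, estimated from data)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var * (1 / na + 1 / nb))

# Made-up weight loss (kg) on two diets, 10 people each
diet_a = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3, 3.8, 3.0, 3.6, 3.4]
diet_b = [2.0, 2.4, 1.8, 2.6, 2.2, 1.9, 2.5, 2.1, 2.3, 2.2]

t = two_sample_t(diet_a, diet_b)
CRITICAL = 2.101  # two-sided t critical value, 18 degrees of freedom, alpha = 0.05
print(abs(t) > CRITICAL)  # True: the mean weight losses differ significantly
```

In practice a statistics library would also return the exact p-value, but comparing the statistic against a critical value is the same decision rule in table form.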
Chi-square compares what you observe with what you expect. Researchers typically use this test when they expect two or more groups of data to be different from each other, and their goal is to determine whether or not the difference is statistically significant.
With this test, you calculate the chi-squared statistic for each data group. You compare the chi-squared statistic to a critical value. If the chi-squared statistic is greater than the critical value, the difference between the groups is statistically significant.
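A hand-rolled sketch of that comparison for a 2×2 table, using the standard critical value of 3.841 for one degree of freedom at the 0.05 level (the counts are made up):

```python
def chi_squared(observed, expected):
    """Sum of (observed - expected)^2 / expected over all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Made-up 2x2 table flattened into cells: clicks / no-clicks for two ad colors
observed = [30, 70, 50, 50]

# Expected counts under the null hypothesis of no association
# (row total * column total / grand total for each cell)
expected = [40, 60, 40, 60]

stat = chi_squared(observed, expected)
CRITICAL_VALUE = 3.841  # chi-squared, 1 degree of freedom, alpha = 0.05

print(stat > CRITICAL_VALUE)  # True: the difference is statistically significant
```

Because the statistic exceeds the critical value, you would reject the null hypothesis that the two groups share the same click rate.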
A Fisher's exact statistical significance test determines whether two variables are independent or not when the sample size is small. The data must be in a contingency table that shows the frequencies of two variables in different combinations to conduct a Fisher's exact test.
The Wilcoxon rank-sum test is a statistical significance test that compares two data groups whose distributions are not normal. The test is based on the ranks of the data, and the null hypothesis is that the two groups come from the same population.
This test is similar to the Wilcoxon rank-sum test but compares the medians of two independent samples.
Analysis of variance (ANOVA) compares the means of three or more groups to determine if there is a significant difference between any of them.
A z-test compares the mean of a sample to the population mean when you know the population standard deviation.
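A minimal sketch of that calculation, using the standard normal tail via `math.erfc` and made-up numbers:

```python
import math

def one_sample_z_test(sample_mean, pop_mean, pop_std, n):
    """Two-sided p-value for a one-sample z-test (population std known)."""
    z = (sample_mean - pop_mean) / (pop_std / math.sqrt(n))
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail of the standard normal
    return z, p

# Made-up numbers: a sample of 36 with mean 103, against a population
# with known mean 100 and known standard deviation 9
z, p = one_sample_z_test(sample_mean=103, pop_mean=100, pop_std=9, n=36)
print(round(z, 2), round(p, 4))  # z = 2.0, p just under 0.05
```

With a p-value just below the 0.05 threshold, this hypothetical sample mean would count as significantly different from the population mean.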
McNemar’s test compares the proportions of two dependent (related) samples.
When working with statistical significance, people often make common mistakes. One of the most frequent is expecting more certainty than the statistics can deliver. Common pitfalls include:
Statistical significance is a measure of how likely it is that a result occurred by chance. It is not a measure of how big or important the results are.
Another common mistake is assuming a significant result is meaningful, which isn’t always the case. A significant result may be due to chance rather than being indicative of any real difference between the groups you’re comparing.
Another common mistake is assuming that a non-significant result means there’s no difference between the groups.
A non-significant result may simply mean the sample size was too small to detect a difference. Alternatively, the groups may genuinely differ, but the difference is too small to reach statistical significance.
When testing multiple hypotheses, you need to adjust your significance level to account for the fact that you’re increasing your chances of getting a false positive result.
Statistical tests are just tools, and they have their limitations. Just because a result is statistically significant does not necessarily mean it’s true.
P-values are just one way of measuring statistical significance, and they’re not always the best measure. Other measurements, such as effect sizes, may be more informative in some cases.
If you keep these common mistakes in mind, you can avoid them when working with statistical significance.
Statistical significance can be tricky to calculate, but many resources are available. By doing various tests in the proper progression, you can eliminate or characterize a scenario quickly, saving time or suggesting a different course of action. With a little practice, you will be able to interpret the results of your data analysis correctly.
Last updated: 9 November 2024