# Book Review: “What is a p-value anyway? 34 Stories to Help You Actually Understand Statistics” by Andrew J. Vickers

Review on Video:

When I was studying engineering, statistics was not a required subject. It wasn’t until I started working that I came to appreciate the power of statistics. In this imprecise world, many things can be explained only in statistical terms like confidence levels, insufficient sample sizes, and so on. I got a full dose of statistics as part of my MBA curriculum. Even after taking many more statistics-related classes and going through a full Six Sigma Green Belt training, there are a few things about statistics that are still hard to grasp. It’s not just the p-value that confuses people; there are simply too many pitfalls when novices or even experts apply statistics to real-life problems.

The author organizes the book as 34 stories across 34 chapters. Since he works in the medical field, he shares quite a few tidbits about how drugs go through clinical trials. It’s a good book for beginners, as well as for people who use statistics regularly and need to watch out for its pitfalls. You might come away with a different perspective on statistics. I did.

My key takeaways – some refreshers and some new ideas:
1. Many things in life don’t follow a normal distribution, especially those involving physical characteristics (pregnancy duration, BMI). Sometimes a log scale fits better.
2. Two sorts of variation: observable natural variation (reproducibility) and variation of study results (repeatability).
3. A statistical tie, e.g. in an election poll, means that the confidence interval includes/overlaps with “no difference”. (Chap. 12)
4. P-values test hypotheses. (Chap. 13)
5. Statistics is mainly used for inference (testing hypotheses) or prediction (extrapolation, interpolation).
6. A null hypothesis is a statement suggesting that nothing interesting is going on (the status quo): that there is no difference between the observed data and what was expected, or no difference between two groups. The p-value is the probability that the data would be at least as extreme as those observed if the null hypothesis were true.
7. T-test vs. Wilcoxon test (new to me): if the data are very skewed, use the Wilcoxon test, which converts the data to ranks first. (Chap. 16)
8. Precision (width of the confidence interval) = variation / sqrt(sample size). To halve the confidence interval (double the precision), you need 4 times the sample size – very expensive. The sample size needed for a specific test = (noise, i.e. variation / signal, i.e. desired confidence-interval width)^2.
9. “Adjusting the results” via multivariable regression helps deal with confounding. (Chap. 19)
10. Sensitivity is the probability of a positive diagnostic test given that you have the disease (true positive rate). Specificity is the probability of a negative test given that you don’t have the disease (true negative rate). What patients actually want to know are the reverse conditionals: the probability that you have the disease given a positive test (positive predictive value), and the probability that you are disease-free given a negative test (negative predictive value). (Chap. 20)
11. Don’t accept the null hypothesis; instead say “we could not show a difference.” Don’t report a p-value of 0; say "P < 0.001" instead.
12. Some test methods, e.g. chi-squared and ANOVA, provide only a p-value, no estimates. Correlation provides estimates but no inferences.
13. One common error is to calculate the probability of something that has already happened and then draw conclusions about what caused it based on whether that probability is high or low, e.g. calculating the odds that OJ killed his wife. Instead, the question to ask is: "if a woman has been murdered and had previously been beaten by her husband, what is the chance that he was the murderer?"
14. Conditional probability depends on both the probability before the information was obtained (the prior probability of heart disease) and the value of the information (such as the accuracy of the heart test).
15. The more statistical tests you conduct, the greater the chance that one will come up statistically significant, even if the null hypothesis is true.
16. A small study has a good chance of failing to reject the null hypothesis even if it is false. Subgroup analysis increases both the risk of falsely rejecting the null hypothesis when it is true and of falsely failing to reject it when it is false.
17. P-values measure strength of evidence, not size of an effect.
18. Don’t compare p-values.
19. Many statistical errors occur because the clock was started at the wrong time.
20. Lead-time bias: if you find a way to detect a problem earlier, the time between detection and the end result gets longer, even if the outcome itself is unchanged.
21. Statistics is used to help scientists analyze data, but it is itself a science.
22. Statistics should be about linking math to science: (a) think through the science and develop statistical hypotheses in light of the specific questions; (b) interpret the results of the analysis in terms of their implications for those questions.
23. Statistics is about people, even if you can’t see the tears.
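
The statistical-tie takeaway (Chap. 12) is easy to check numerically. A minimal sketch with invented poll numbers – 52% vs. 48% among 400 respondents are my assumptions, not the book’s:

```python
import math

# Hypothetical two-candidate poll: A at 52%, B at 48%, n = 400 respondents.
n = 400
p_a = 0.52
lead = p_a - (1 - p_a)                         # observed lead: 4 points

# Standard error of the lead in a two-candidate race: 2 * sqrt(p(1-p)/n)
se = 2 * math.sqrt(p_a * (1 - p_a) / n)

# 95% confidence interval for the lead
lo, hi = lead - 1.96 * se, lead + 1.96 * se

tie = lo < 0 < hi    # interval includes "no difference" -> statistical tie
```

With these numbers the interval runs from roughly −6 to +14 points, so despite the 4-point lead the poll is a statistical tie.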
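
The p-value definition above can be made concrete with a worked example of my own (not from the book): testing a fair coin after observing 8 heads in 10 flips.

```python
from math import comb

# Null hypothesis: the coin is fair (p = 0.5). Observed: 8 heads in 10 flips.
# Two-sided p-value: probability of a result at least as extreme, i.e.
# 8 or more heads, or 8 or more tails (2 or fewer heads).
def binom_pmf(n, k, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, k = 10, 8
p_value = sum(binom_pmf(n, i) for i in range(n + 1) if i >= k or i <= n - k)
# p_value = 112/1024 = 0.109375: not strong evidence against a fair coin
```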
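
For the t-test vs. Wilcoxon takeaway (Chap. 16), the rank conversion is the heart of the test. Here is a minimal sketch of just the rank-sum statistic – not the full test, no p-value – with tied values sharing their average rank:

```python
# Rank-sum statistic at the core of the Wilcoxon/Mann-Whitney idea:
# replace the pooled data by ranks, then sum the ranks of one sample.
def rank_sum(a, b):
    pooled = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        # positions i..j-1 (0-based) share this value; their 1-based ranks
        # are i+1..j, so each tied value gets the average rank.
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j
    return sum(ranks[x] for x in a)

# rank_sum([1, 2, 3], [4, 5, 6]) -> 6 (the smallest possible rank sum)
```

Because only ranks enter the statistic, extreme skew or outliers in the raw values cannot dominate the result the way they can in a t-test.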
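
The precision rule of thumb is simple arithmetic to sanity-check (the numbers below are mine; a real confidence half-width also carries a z-multiplier such as 1.96, which the rule of thumb drops):

```python
import math

# Rule of thumb from the takeaway: precision ~ variation / sqrt(n),
# so the sample size for a target width is (variation / width)^2.
def ci_halfwidth(sd, n):
    return sd / math.sqrt(n)

def required_n(sd, target_width):
    return math.ceil((sd / target_width) ** 2)

# Halving the width costs 4x the sample size:
w1 = ci_halfwidth(10.0, 100)   # 1.0
w2 = ci_halfwidth(10.0, 400)   # 0.5 -- 4x the n for half the width
```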
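
Why a positive result from an accurate test can still leave the disease unlikely (the sensitivity/specificity takeaway, Chap. 20) is easiest to see with a worked 2x2 table; the prevalence, sensitivity, and specificity below are my own illustrative numbers, not from the book:

```python
# Hypothetical screening: 1% prevalence, 90% sensitivity, 95% specificity,
# applied to a population of 100,000 people.
pop = 100_000
prevalence, sensitivity, specificity = 0.01, 0.90, 0.95

diseased = pop * prevalence                 # 1,000 people with the disease
healthy = pop - diseased                    # 99,000 without it
true_pos = diseased * sensitivity           # ~900 correctly flagged
false_neg = diseased - true_pos             # ~100 missed
true_neg = healthy * specificity            # ~94,050 correctly cleared
false_pos = healthy - true_neg              # ~4,950 falsely flagged

ppv = true_pos / (true_pos + false_pos)     # ~0.154: most positives are false
npv = true_neg / (true_neg + false_neg)     # ~0.999: negatives are reassuring
```

Because the disease is rare, the 4,950 false positives from the healthy majority swamp the 900 true positives, so the positive predictive value is only about 15% even for a quite accurate test.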
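
The multiple-testing takeaway is also simple arithmetic: with k independent tests at significance level 0.05 and every null hypothesis true, the chance of at least one false positive grows quickly.

```python
# P(at least one "significant" result among k independent tests,
# all null hypotheses true) = 1 - (1 - alpha)^k
def p_any_false_positive(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

# k = 1  -> 0.05
# k = 5  -> ~0.23
# k = 20 -> ~0.64
```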