Greg Werbin points us to an online discussion of the following question:
Why continue to teach and use hypothesis testing (with all its difficult concepts, which are at the root of some of the most common statistical sins) for problems where there is an interval estimator (confidence, bootstrap, credibility, or whatever)? What is the best explanation (if any) to be given to students? Only tradition?
I won’t attempt to answer this question, but I will comment on the replies. Notably to me, none of the replies said anything about controlling Type 1 error rates or anything like that. Rather, the main defenses of hypothesis testing were not defenses of hypothesis testing at all, but defenses of decision analysis.
This is interesting because in Bayesian inference, decision analysis comes automatically (I’d say “pretty much for free,” but that’s not quite right because it can take effort to define a reasonable utility function. You could say that this is effort worth taking, and I’d pretty much agree with that, but it is effort), so it doesn’t need any special name. To do Bayesian decision analysis you don’t need any null and alternative hypotheses; you just lay out the costs and benefits and go from there.
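To make the “lay out the costs and benefits and go from there” step concrete, here is a minimal sketch of a Bayesian decision analysis. Everything specific in it is made up for illustration: the posterior draws of a treatment effect theta, the two candidate actions, and the cost/benefit numbers in the utility function are all hypothetical. The point is only the structure: no null or alternative hypothesis appears anywhere; you just average utility over the posterior and pick the action that comes out best.

```python
import random

random.seed(1)

# Pretend these are posterior draws of a treatment effect theta,
# e.g. the output of an MCMC fit. Here: a normal posterior
# centered at 0.3 with sd 0.2 (illustrative numbers only).
posterior_draws = [random.gauss(0.3, 0.2) for _ in range(10_000)]

# Lay out the costs and benefits directly: utility of each action
# as a function of theta. These payoffs are made up.
def utility(action, theta):
    if action == "adopt":
        # benefit scales with the effect; fixed cost of adopting
        return 100 * theta - 20
    return 0.0  # status quo: no cost, no benefit

# Posterior expected utility of an action: average utility
# over the posterior draws.
def expected_utility(action):
    return sum(utility(action, t) for t in posterior_draws) / len(posterior_draws)

# The decision: take whichever action has higher expected utility.
best = max(["adopt", "status quo"], key=expected_utility)
```

With these particular numbers, adopting pays off on average (expected utility roughly 100 × 0.3 − 20 = 10 versus 0 for the status quo), so the analysis picks “adopt.” Change the cost from 20 to 40 and the same machinery picks the status quo; no test statistic or significance threshold is involved at any point.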
But, for people with classical training, “hypothesis testing” is a thing. And I agree that, if all you have is interval estimation, you need to take some other step to get to decision analysis.
Some of the participants in that discussion did bring up Bayesian inference, so it’s not like I’m saying that my above thoughts represent some deep new idea. My point here is that I’ve typically taken hypothesis testing at face value (as a way of evaluating the evidence against a null hypothesis), but I suppose that for many people, hypothesis testing is the default statistical tool for decision analysis. Scary thoughts.