I’m on an island in Maine for a few weeks (big shout-out to North Haven!). This morning I picked up a copy of “Working Waterfront,” a newspaper that focuses on issues facing coastal fishing communities, and I came across an article about modeling “fish” populations (actually lobsters, which I guess are considered “fish” for regulatory purposes). When I read it, I thought, “Wow, this article is really well-written, not dumbed down like articles in most newspapers.” I think it’s great that a small coastal newspaper carries reporting like this. (The online version has a few things that I don’t recall seeing in the print version, too, so it’s even better.) But in addition to being struck by finding such a good article in a small newspaper, I was struck by this:
According to [University of Maine scientist Yong] Chen, there are four main areas where his model improved on the prior version. “We included the inshore trawl data from Maine and other state surveys, in addition to federal survey data; we had better catch data to work with than before; we had more realistic biology built into our virtual lobsters; and we used a statistical approach that incorporates margins of error in our inputs (this approach uses Bayesian statistics),” he said.
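(Aside: for readers wondering what “incorporates margins of error in our inputs” might look like in practice, here’s a minimal sketch. To be clear, this is not Chen’s actual stock-assessment model, and the survey numbers are made up; it’s just a toy conjugate normal-normal Bayesian update in Python, showing how reported margins of error on the inputs carry through to uncertainty in the output.)

import numpy as np

def posterior_normal(prior_mean, prior_sd, obs, obs_sd):
    # Conjugate normal-normal update: combine a Normal prior with
    # independent Normal observations whose error SDs are known.
    obs = np.asarray(obs)
    obs_sd = np.asarray(obs_sd)
    precision = 1.0 / prior_sd**2 + np.sum(1.0 / obs_sd**2)
    mean = (prior_mean / prior_sd**2 + np.sum(obs / obs_sd**2)) / precision
    return mean, np.sqrt(1.0 / precision)

# Made-up numbers: mean lobsters per tow from two surveys, each reported
# with its own margin of error (expressed here as a standard error).
state_estimate, state_se = 42.0, 5.0      # hypothetical Maine inshore trawl
federal_estimate, federal_se = 35.0, 8.0  # hypothetical federal survey

post_mean, post_sd = posterior_normal(
    prior_mean=40.0, prior_sd=20.0,  # weak prior on the true mean per tow
    obs=[state_estimate, federal_estimate],
    obs_sd=[state_se, federal_se],
)
print(f"posterior: mean {post_mean:.1f}, sd {post_sd:.1f} lobsters per tow")

The point of the Bayesian bookkeeping is that the less precise federal number gets automatically down-weighted, and the answer comes with an uncertainty attached rather than as a bare point estimate.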
The phrase “virtual lobsters” is kinda nice, I think. But check out the seemingly gratuitous mention of Bayesian statistics. There’s just no way the intended audience for this article can be expected to know what Bayesian statistics is, unlike “v-notching protection,” which is mentioned elsewhere but which the article doesn’t bother to explain because, hey, everybody knows what v-notching protection is.
I’m not sure why Bayesian statistics is mentioned here. Just to throw in some jargon and sound sophisticated? Or is there some sense that readers won’t know what Bayesian statistics is but may have heard that it’s a good thing?