Archive of posts filed under the Decision Theory category.

“Richard Jarecki, Doctor Who Conquered Roulette, Dies at 86”

[relevant video] Thanatos Savehn is right. This obituary, written by someone named “Daniel Slotnik” (!), is just awesome: Many gamblers see roulette as a game of pure chance — a wheel is spun, a ball is released and winners and losers are determined by luck. Richard Jarecki refused to believe it was that simple. He […]

Trapped in the spam folder? Here’s what to do.

[Somewhat-relevant image] It seems that some people’s comments are getting trapped in the spam filter. Here’s how things go. The blog software triages the comments: 1. Most legitimate comments are automatically approved. You write the comment and it shows up right away. 2. Some comments are flagged as potentially spam. About half of these are […]

Response to Rafa: Why I don’t think ROC [receiver operating characteristic] works as a model for science

Someone pointed me to this post from a few years ago where Rafael Irizarry argues that scientific “pessimists” such as myself are, at least in some fields, “missing a critical point: that in practice, there is an inverse relationship between increasing rates of true discoveries and decreasing rates of false discoveries and that true discoveries […]
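To fix ideas about the tradeoff Irizarry describes, here is a toy simulation of my own construction (the numbers are invented, not from either post): loosening the discovery threshold raises the rate of true discoveries and the rate of false discoveries together.

```python
# Toy ROC-style tradeoff: a looser discovery threshold buys more true
# discoveries at the price of a higher false discovery rate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
is_real = rng.random(n) < 0.1                                # 10% of effects are real
z = rng.normal(loc=np.where(is_real, 3.0, 0.0), scale=1.0)   # test statistics

for z_cut in (3.0, 2.0, 1.0):
    discovered = z > z_cut
    tpr = (discovered & is_real).sum() / is_real.sum()
    fdr = (discovered & ~is_real).sum() / max(discovered.sum(), 1)
    print(f"threshold {z_cut}: true-positive rate {tpr:.2f}, FDR {fdr:.2f}")
```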

Don’t call it a bandit

Here’s why I don’t like the term “multi-armed bandit” to describe the exploration-exploitation tradeoff of inference and decision analysis. First, and less importantly, each slot machine (or “bandit”) only has one arm. Hence it’s many one-armed bandits, not one multi-armed bandit. Second, the basic strategy in these problems is to play on lots of machines […]
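For a concrete picture of the exploration-exploitation tradeoff, here is a minimal epsilon-greedy sketch (my toy example, not from the post; the payout probabilities are made up):

```python
# Epsilon-greedy play across many one-armed bandits: mostly exploit the
# machine with the best observed payout, occasionally explore at random.
import numpy as np

rng = np.random.default_rng(1)
p_win = rng.uniform(0.1, 0.6, size=10)   # true payout rates, unknown to the player
pulls = np.zeros(10)
wins = np.zeros(10)
eps = 0.1

for t in range(5_000):
    if rng.random() < eps or pulls.min() == 0:
        arm = rng.integers(10)                 # explore a random machine
    else:
        arm = np.argmax(wins / pulls)          # exploit the current best estimate
    pulls[arm] += 1
    wins[arm] += rng.random() < p_win[arm]

print("best machine:", np.argmax(p_win), "| most played:", np.argmax(pulls))
```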

When LOO and other cross-validation approaches are valid

Introduction Zacco asked on the Stan Discourse whether leave-one-out (LOO) cross-validation is valid for phylogenetic models. He also referred to Dan’s excellent blog post, which mentioned the iid assumption. Instead of iid it would be better to talk about the exchangeability assumption, but I (Aki) got a bit lost in my Discourse answer (so don’t bother to go […]
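As a plain (non-Bayesian, non-Stan) analogue of what LOO does, here is a minimal sketch: each point is scored by a model fit to all the other points, which implicitly treats the points as exchangeable.

```python
# Leave-one-out cross-validation for a linear regression on toy data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.3, size=30)

errs = []
for train, test in LeaveOneOut().split(X):
    fit = LinearRegression().fit(X[train], y[train])       # fit without point i
    errs.append((y[test] - fit.predict(X[test]))[0] ** 2)  # score point i
print("LOO mean squared error:", np.mean(errs))
```

If the points are strongly dependent, as in phylogenetic data, leaving out a single point no longer mimics prediction for genuinely new data, which is the concern behind Zacco's question.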

“Seeding trials”: medical marketing disguised as science

Paul Alper points to this horrifying news article by Mary Chris Jaklevic, “how a medical device ‘seeding trial’ disguised marketing as science.” I’d never heard of “seeding trials” before. Here’s Jaklevic: As a new line of hip implants was about to be launched in 2000, a stunning email went out from the manufacturer’s marketing department. […]

How dumb do you have to be…

I (Phil) just read an article about Apple. Here’s the last sentence: “Apple has beaten earnings expectations in every quarter but one since March 2013.” [Note added a week later: on July 31 Apple reported earnings for the fiscal third quarter. Earnings per share was $2.34 vs. the ‘consensus estimate’ of $2.18, according to Thomson Reuters.]
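A back-of-envelope check of why that sentence is telling (my numbers; the post doesn't give the quarter count): if “expectations” were unbiased, beating them would be roughly a coin flip each quarter, and a near-perfect record would be wildly improbable.

```python
# Probability of beating unbiased expectations in at least 20 of 21
# quarters under a fair-coin model; the 21 is an assumed quarter count.
from math import comb

n, k = 21, 20
p = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
print(f"P(>= {k} of {n} beats | fair coin) = {p:.2e}")  # about 1e-5
```

The more plausible reading is that the “consensus estimates” are systematically low.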

Parsimonious principle vs integration over all uncertainties

tl;dr If you have bad models, bad priors, or bad inference, choose the simplest possible model. If you have good models, good priors, and good inference, use the most elaborate model for predictions. To make interpretation easier you may use a smaller model with predictive performance similar to that of the most elaborate model. Merijn Mestdagh emailed me […]
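A sketch of the last point on toy data (my construction, not Merijn's example): pick the model to report by comparing cross-validated predictive performance, not by parsimony alone.

```python
# Compare an elaborate model against a smaller one by held-out error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)  # two real signals

full = cross_val_score(LinearRegression(), X, y, cv=10,
                       scoring="neg_mean_squared_error")
small = cross_val_score(LinearRegression(), X[:, :2], y, cv=10,
                        scoring="neg_mean_squared_error")
print(f"10-predictor MSE {-full.mean():.3f}, 2-predictor MSE {-small.mean():.3f}")
# If the small model predicts about as well, it's the easier one to interpret.
```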

From no-data to data: The awkward transition

I was going to write a post with the above title, but now I don’t remember what I was going to say!

Where that title came from

I could not think of a good title for this post. My first try was “An institutional model for the persistence of false belief, but I don’t think it’s helpful to describe scientific paradigms as ‘true’ or ‘false.’ Also, boo on cheap laughs at the expense of academia,” and later attempts were even worse. At […]

Data-based ways of getting a job

Bart Turczynski writes: I read the following blog with a lot of excitement: Then I reread it and paid attention to the graphs and models (which don’t seem to be actual models, but rather, well, lines.) The story makes sense, but the science part is questionable (or at least unclear.) Perhaps you’d like to have […]

The persistence of bad reporting and the reluctance of people to criticize it

Mark Palko pointed to a bit of puff-piece journalism on the tech entrepreneur Elon Musk that was so extreme that it read as a possible parody, and I wrote, “it could just be as simple as that [author Neil] Strauss decided that a pure puff piece would give him access to write a future Musk […]

On deck through the rest of the year

July: The Ponzi threshold and the Armstrong principle Flaws in stupid horrible algorithm revealed because it made numerical predictions PNAS forgets basic principles of game theory, thus dooming thousands of Bothans to the fate of Alderaan Tutorial: The practical application of complicated statistical methods to fill up the scientific literature with confusing and irrelevant analyses […]

The “Psychological Science Accelerator”: it’s probably a good idea but I’m still skeptical

Asher Meir points us to this post by Christie Aschwanden entitled, “Can Teamwork Solve One Of Psychology’s Biggest Problems?”, which begins: Psychologist Christopher Chartier admits to a case of “physics envy.” That field boasts numerous projects on which international research teams come together to tackle big questions. Just think of CERN’s Large Hadron Collider or […]

What is the role of statistics in a machine-learning world?

I just happened to come across this quote from Dan Simpson: When the signal-to-noise ratio is high, modern machine learning methods trounce classical statistical methods when it comes to prediction. The role of statistics in this case is really to boost the signal-to-noise ratio through the understanding of things like experimental design.
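As a toy check on that claim (my own simulation, not from Dan's post): a flexible learner beats a linear model comfortably when a nonlinear signal is strong, and the gap shrinks as the noise grows.

```python
# Flexible vs. linear model at high and low signal-to-noise ratios.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(1_000, 3))
signal = np.sin(3 * X[:, 0]) + X[:, 1] ** 2   # nonlinear true function

for noise_sd in (0.1, 3.0):                   # high SNR, then low SNR
    y = signal + rng.normal(scale=noise_sd, size=len(signal))
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    for model in (RandomForestRegressor(random_state=0), LinearRegression()):
        err = mean_squared_error(yte, model.fit(Xtr, ytr).predict(Xte))
        print(f"noise {noise_sd}: {type(model).__name__} MSE {err:.2f}")
```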

Ways of knowing in computer science and statistics

Brad Groff writes: Thought you might find this post by Ferenc Huszar interesting. Commentary on how we create knowledge in machine learning research and how we resolve benchmark results with (belated) theory. Key passage: You can think of “making a deep learning method work on a dataset” as a statistical test. I would argue […]

Answering the question, What predictors are more important?, going beyond p-value thresholding and ranking

Daniel Kapitan writes: We are in the process of writing a paper on the outcome of cataract surgery. A (very rough!) draft can be found here, to provide you with some context: https://www.overleaf.com/read/wvnwzjmrffmw. Using standard classification methods (Python sklearn, with synthetic oversampling to address the class imbalance), we are able to predict a poor outcome […]
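One standard way past p-value thresholding and ranking (a generic sklearn sketch, not Kapitan's actual pipeline): permutation importance measures how much shuffling each predictor degrades held-out predictive performance.

```python
# Rank predictors by their contribution to out-of-sample accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=6, n_informative=3,
                           weights=[0.9, 0.1], random_state=0)  # imbalanced classes
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)

imp = permutation_importance(clf, Xte, yte, n_repeats=20, random_state=0)
for j in np.argsort(-imp.importances_mean):
    print(f"feature {j}: importance {imp.importances_mean[j]:.3f} "
          f"+/- {imp.importances_std[j]:.3f}")
```

Reporting the uncertainty in the importances, rather than a bare ranking, is the point of the exercise.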

Chasing the noise in industrial A/B testing: what to do when all the low-hanging fruit have been picked?

Commenting on this post on the “80% power” lie, Roger Bohn writes: The low power problem bugged me so much in the semiconductor industry that I wrote 2 papers about it around 1995. Variability estimates come naturally from routine manufacturing statistics, which in semicon were tracked carefully because they are economically important. The sample size is […]
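For readers who want the arithmetic behind the “80% power” complaint, here is a generic power calculation (a sketch, not Bohn's semiconductor numbers): the sample size a two-sample t-test needs grows quickly as the effect size shrinks.

```python
# Per-group sample size for 80% power at alpha = 0.05, two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.5, 0.2):                # large, medium, small effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size {d}: about {n:.0f} per group")
```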

About that quasi-retracted study on the Mediterranean diet . . .

Some people asked me what I thought about this story. A reporter wrote to me about it last week, asking if it looked like fraud. Here’s my reply: Based on the description, there does not seem to be the implication of fraud. The editor’s report mentioned “protocol deviations, including the enrollment of participants who were […]

Forking paths come from choices in data processing and also from choices in analysis

Michael Wiebe writes: I’m a PhD student in economics at UBC. I’m trying to get a good understanding of the garden of forking paths, and I have some questions about your paper with Eric Loken. You describe the garden of forking paths as “researcher degrees of freedom without fishing” (#3), where the researcher only performs […]
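A small simulation of the idea (my sketch, not from the Gelman and Loken paper): a researcher who picks which comparison to test after seeing the data can exceed the nominal 5% false-positive rate without ever running more than one test.

```python
# Forking paths: test only whichever of three null outcomes looks most
# promising, and watch the false-positive rate inflate past 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
hits = 0
n_sims = 2_000
for _ in range(n_sims):
    y = rng.normal(size=(40, 3))                  # three outcomes, all null
    best = np.argmax(np.abs(y.mean(axis=0)))      # data-dependent choice
    p = stats.ttest_1samp(y[:, best], 0).pvalue   # the one test actually run
    hits += p < 0.05
print(f"false-positive rate: {hits / n_sims:.3f} (nominal 0.05)")
```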