On deck through the first half of 2018

Here’s what we got scheduled for ya:

  • I’m with Errol: On flypaper, photography, science, and storytelling
  • Politically extreme yet vital to the nation
  • How does probabilistic computation differ in physics and statistics?
  • “Each computer run would last 1,000-2,000 hours, and, because we didn’t really trust a program that ran so long, we ran it twice, and it verified that the results matched. I’m not sure I ever was present when a run finished.”
  • “However noble the goal, research findings should be reported accurately. Distortion of results often occurs not in the data presented but . . . in the abstract, discussion, secondary literature and press releases. Such distortion can lead to unsupported beliefs about what works for obesity treatment and prevention. Such unsupported beliefs may in turn adversely affect future research efforts and the decisions of lawmakers, clinicians and public health leaders.”
  • Nudge nudge, say no more
  • Alzheimer’s Mouse research on the Orient Express
  • Incentive to cheat
  • Why are these explanations so popular?
  • The retraction paradox: Once you retract, you implicitly have to defend all the many things you haven’t yet retracted
  • The puzzle: Why do scientists typically respond to legitimate scientific criticism in an angry, defensive, closed, non-scientific way? The answer: We’re trained to do this during the process of responding to peer review.
  • It’s not about getting a “statistically significant” result or proving your hypothesis; it’s about understanding variation
  • Hey, here’s a new reason for a journal to reject a paper: it’s “annoying” that it’s already on a preprint server
  • Statistical behavior at the end of the world: the effect of the publication crisis on U.S. research productivity
  • “The following needs to be an immutable law of journalism: when someone with no track record comes into a field claiming to be able to do a job many times better for a fraction of the cost, the burden of proof needs to shift quickly and decisively onto the one making the claim. The reporter simply has to assume the claim is false until substantial evidence is presented to the contrary.”
  • (What’s So Funny ‘Bout) Evidence, Policy, and Understanding
  • A lesson from the Charles Armstrong plagiarism scandal: Separation of the judicial and the executive functions
  • How to get a sense of Type M and type S errors in neonatology, where trials are often very small? Try fake-data simulation! (A quick sketch of the idea appears below, after the list.)
  • How smartly.io productized Bayesian revenue estimation with Stan
  • Big Data Needs Big Model
  • State-space modeling for poll aggregation . . . in Stan!
  • When the appeal of an exaggerated claim is greater than a prestige journal
  • Bayes, statistics, and reproducibility: My talk at Rutgers 4:30pm on Mon 29 Jan 2018
  • The Paper of My Enemy Has Been Retracted
  • Another bivariate dependence measure!
  • The multiverse in action!
  • Looking at all possible comparisons at once: It’s not “overfitting” if you put it in a multilevel model
  • Objects of the class “Verbal Behavior”
  • Education, maternity leave, and breastfeeding
  • Geoff Norman: Is science a special kind of storytelling?
  • The Anti-Bayesian Moment and Its Passing
  • “revision-female-named-hurricanes-are-most-likely-not-deadlier-than-male-hurricanes”
  • What to teach in a statistics course for journalists?
  • Snappy Titles: Deterministic claims increase the probability of getting a paper published in a psychology journal
  • p=0.24: “Modest improvements” if you want to believe it, a null finding if you don’t.
  • N=1 experiments and multilevel models
  • Methodological terrorism. For reals. (How to deal with “what we don’t know” in missing-data imputation.)
  • 354 possible control groups; what to do?
  • I’m skeptical of the claims made in this paper
  • Return of the Klam
  • 3 quick tricks to get into the data science/analytics field
  • One data pattern, many interpretations
  • Use multilevel modeling to correct for the “winner’s curse” arising from selection of successful experimental results
  • How to think about the risks from low doses of radon
  • “Write No Matter What” . . . about what?
  • Of rabbits and cannons
  • Low power and the replication crisis: What have we learned since 2004 (or 1984, or 1964)?
  • Let’s face it, I know nothing about spies.
  • The graphs tell the story. Now I want to fit this story into a bigger framework so it all makes sense again.
  • “Deeper into democracy: the legitimacy of challenging Brexit’s majoritarian mandate”
  • “Have Smartphones Destroyed a Generation?” and “The Narcissism Epidemic”: How can we think about the evidence?
  • “If I wanted to graduate in three years, I’d just get a sociology degree.”
  • Rasmussen and Williams never said that Gaussian processes resolve the problem of overfitting
  • Big Oregano strikes again
  • Bayes for estimating a small effect in the context of large variation
  • What prior to use for item-response parameters?
  • Concerns about Brian Wansink’s claims and research methods have been known for years
  • The “shy Trump voter” meta-question: Why is an erroneous theory so popular?
  • Who’s afraid of prediction markets? (Hanson vs. Thicke)
  • “Like a harbor clotted with sunken vessels”
  • Information flows both ways (Martian conspiracy theory edition)
  • Another reason not to believe the Electoral Integrity Project
  • “and, indeed, that my study is consistent with X having a negative effect on Y.”
  • Incorporating Bayes factor into my understanding of scientific information and the replication crisis
  • Murray Davis on learning from stories
  • 3 cool tricks about constituency service (Daniel O’Donnell and Nick O’Neill edition)
  • A more formal take on the multiverse
  • Classical hypothesis testing is really really hard
  • What We Talk About When We Talk About Bias
  • What are the odds of Trump’s winning in 2020?
  • Garden of forking paths – poker analogy
  • The New England Journal of Medicine wants you to “identify a novel clinical finding”
  • Statistical controversy over “trophy wives”
  • Last lines of George V. Higgins
  • “It’s not just that the emperor has no clothes, it’s more like the emperor has been standing in the public square for fifteen years screaming, I’m naked! I’m naked! Look at me! And the scientific establishment is like, Wow, what a beautiful outfit.”
  • Forking paths said to be a concern in evaluating stock-market trading strategies
  • An economist wrote in, asking why it would make sense to fit Bayesian hierarchical models instead of frequentist random effects.
  • Debate over claims of importance of spending on Obamacare advertising
  • Spatial patterns in crime: Where’s he gonna strike next?
  • The problem with those studies that claim large and consistent effects from small and irrelevant inputs
  • Combining Bayesian inferences from many fitted models
  • Yet another IRB horror story
  • Hey! Free money!
  • Judgment Under Uncertainty: Heuristics and Biases
  • Perspectives on Psychological Science Article Investigated Over Fixing Suspicions
  • This one’s important: How to better analyze cancer drug trials using multilevel models.
  • Does adding women to corporate boards increase stock price?
  • A potential big problem with placebo tests in econometrics: they’re subject to the “difference between significant and non-significant is not itself statistically significant” issue
  • Yes on design analysis, No on “power,” No on sample size calculations
  • “Imaginary gardens with real data”
  • Learn by experimenting!
  • A possible defense of cargo cult science?
  • Don’t define reproducibility based on p-values
  • Fitting a hierarchical model without losing control
  • Failure of failure to replicate
  • Tools for detecting junk science? Transparency is the key.
  • It’s all about Hurricane Andrew: Do patterns in post-disaster donations demonstrate egotism?
  • “Bit by Bit: Social Research in the Digital Age”
  • Fixing the reproducibility crisis: Openness, Increasing sample size, and Preregistration ARE NOT ENUF!!!!
  • Taking perspective on perspective taking
  • An Upbeat Mood May Boost Your Paper’s Publicity
  • Using partial pooling when preparing data for machine learning applications
  • Trichotomous
  • Bayesian inference and religious belief
  • What is “blogging”? Is it different from “writing”?
  • There’s nothing embarrassing about self-citation
  • The cargo cult continues
  • A few words on a few words on Twitter’s 280 experiment.
  • A quick rule of thumb is that when someone seems to be acting like a jerk, an economist will defend the behavior as being the essence of morality, but when someone seems to be doing something nice, an economist will raise the bar and argue that he’s not being nice at all.
  • The syllogism that ate social science
  • Early p-hacking investments substantially boost adult publication record
  • Economic growth -> healthy kids?
  • A coding problem in the classic study, Nature and Origins of Mass Opinion
  • A model for scientific research programmes that include both “exploratory phenomenon-driven research” and “theory-testing science”
  • Anthony West’s literary essays
  • Walter Benjamin on storytelling
  • Doomsday! Problems with interpreting a confidence interval when there is no evidence for the assumed sampling model
  • How to read (in quantitative social science). And by implication, how to write.
  • Klam > Ferris
  • Why is the replication crisis centered on social psychology?
  • What killed alchemy?
  • Watch out for naively (because implicitly based on flat-prior) Bayesian statements based on classical confidence intervals!
  • Evaluating Sigmund Freud: Should we compare him to biologists or economists
  • Slow to update
  • “16 and Pregnant”
  • Could you say that again less clearly, please? A general-purpose data garbler for applications requiring confidentiality
  • Mouse Among the Cats
  • How to reduce Type M errors in exploratory research?
  • “Eureka bias”: When you think you made a discovery and then you don’t want to give it up, even if it turns out you interpreted your data wrong
  • Graphs and tables, tables and graphs
  • Awesome data visualization tool for brain research
  • Prior distributions and the Australia principle
  • The fallacy of objective measurement: the case of gaydar
  • A Bayesian take on ballot order effects
  • What is “weight of evidence” in bureaucratese?
  • The anthropic principle in statistics
  • “I admire the authors for simply admitting they made an error and stating clearly and without equivocation that their original conclusions were not substantiated.”
  • Write your congressmember to require researchers to publicly post their code?
  • “And when you did you weren’t much use, you didn’t even know what a peptide was”
  • Three-warned is three-armed
  • Kaleidoscope
  • Some experiments are just too noisy to tell us much of anything at all: Political science edition
  • “Not statistically significant” != 0, stents edition
  • All Fools in a Circle
  • In which I demonstrate my ignorance of world literature
  • A link between science and hype? Not always!
  • “This is a weakness of our Bayesian Data Analysis book: We don’t have a lot of examples with informative priors.”
  • Against Screening
  • Ambiguities with the supposed non-replication of ego depletion
  • Forking paths come from choices in data processing and also from choices in analysis
  • Average predictive comparisons and the All Else Equal fallacy
  • “Human life is unlimited – but short”
  • Oxycontin, Purdue Pharma, the Sackler family, and the FDA
  • Wolfram Markdown, also called Computational Essay
  • The necessity—and the difficulty—of admitting failure in research and clinical practice
  • “My advisor and I disagree on how we should carry out repeated cross-validation. We would love to have a third expert opinion…”
  • A style of argument can be effective in an intellectual backwater but fail in the big leagues—but maybe it’s a good thing to have these different research communities
  • Olivia Goldhill and Jesse Singal report on the Implicit Association Test
  • Do women want more children than they end up having?
  • When “nudge” doesn’t work: Medication Reminders to Outcomes After Myocardial Infarction
  • Chasing the noise in industrial A/B testing: what to do when all the low-hanging fruit have been picked?
  • Bayesians are frequentists
  • Power analysis and NIH-style statistical practice: What’s the implicit model?
  • Using statistical ideas and methods to adjust drug doses
  • Carol Nickerson explains what those mysterious diagrams were saying
  • Answering the question, What predictors are more important?, going beyond p-value thresholding and ranking
  • Ways of knowing in computer science and statistics
  • Trying to make some sense of it all, but I can see it makes no sense at all . . . stuck in the middle with you
  • Regression to the mean continues to confuse people and lead to errors in published research
  • Multilevel modeling in Stan improves goodness of fit — literally.
  • The “Psychological Science Accelerator”: it’s probably a good idea but I’m still skeptical
  • Back to the Wall
  • He has a math/science background and wants to transition to social science. Should he get a statistics degree and do social science from there, or should he get a graduate degree in social science or policy?
  • Francis Spufford writes just like our very own Dan Simpson and he also knows about the Australia paradox!
  • Problems with surrogate markers
  • Vigorous data-handling tied to publication in top journals among public health researchers
  • The Ponzi threshold and the Armstrong principle
  • Flaws in stupid horrible algorithm revealed because it made numerical predictions
  • PNAS forgets basic principles of game theory, thus dooming thousands of Bothans to the fate of Alderaan

Enjoy.
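
P.S. One of the items above, on getting a sense of Type M and Type S errors in small neonatology trials, points to fake-data simulation. As a teaser, here is a minimal sketch of that idea (in Python rather than R/Stan, with a made-up true effect and standard error chosen purely for illustration): simulate many noisy estimates of an assumed small effect, keep only the “statistically significant” ones, and see how often they get the sign wrong and by how much they exaggerate the magnitude.

```python
# Minimal fake-data simulation of Type M (magnitude) and Type S (sign) errors,
# in the spirit of design analysis. The true effect and standard error below are
# hypothetical illustrative numbers, not estimates from any real trial.
import numpy as np

rng = np.random.default_rng(2018)

true_effect = 0.1   # assumed small true effect
se = 0.25           # assumed standard error of a small trial's estimate
n_sims = 100_000    # number of simulated replications

# Draw repeated noisy estimates of the same underlying effect.
estimates = rng.normal(true_effect, se, n_sims)

# Keep only the "statistically significant" estimates (|z| > 1.96).
significant = estimates[np.abs(estimates / se) > 1.96]

power = len(significant) / n_sims
type_s = np.mean(np.sign(significant) != np.sign(true_effect))  # wrong-sign rate
type_m = np.mean(np.abs(significant)) / abs(true_effect)        # exaggeration ratio

print(f"Share of runs reaching significance: {power:.3f}")
print(f"Type S error rate among significant results: {type_s:.3f}")
print(f"Type M (exaggeration) ratio: {type_m:.1f}")
```

With numbers like these, only a small fraction of simulated runs reach significance, and the ones that do overstate the effect severalfold and occasionally get its sign backwards; swap in whatever effect size and standard error are plausible for your own setting.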

7 thoughts on “On deck through the first half of 2018”

  1. I was surprised to find myself actually reading through this entire list and enjoying it. Sort of interesting trying to guess the thrusts of some of the posts. And this is great: “A quick rule of thumb is that when someone seems to be acting like a jerk, an economist will defend the behavior as being the essence of morality, but when someone seems to be doing something nice, an economist will raise the bar and argue that he’s not being nice at all.”

  2. If I may say, the great value of this blog is to people who studied a few statistics subjects at university: rather than their understanding atrophying, it remains spick and span, so to speak.

    Please keep it up and congratulations.
