
Unfinished (so far) draft blog posts

Most of the time when I start writing a blog post, I continue until it's finished. As of this writing, this blog has 7128 posts published, 137 scheduled, and only 434 unpublished drafts sitting in the folder.

434 might sound like a lot, but we’ve been blogging for over 10 years, and a bunch of those drafts never really got started.

Anyway, just for your amusement, I thought I’d share the titles of the draft posts, most of which are unfinished and probably will never be finished. They’re listed in reverse chronological order, and I’m omitting all the posts that I hadn’t bothered to title.

Here are the most recent few:

  • Of polls and prediction markets: More on #BrexitFail
  • Deep learning, model checking, AI, the no-homunculus principle, and the unitary nature of consciousness
  • “Simple, Scalable and Accurate Posterior Interval Estimation”
  • ESP and the Bork effect
  • Hey, PPNAS . . . this one is the fish that got away.
  • The new quantitative journalism
  • Trying to make some sense of it all, But I can see that it makes no sense at all . . . Stuck in a local maximum with you
  • Statistical significance, the replication crisis, and a principle from Bill James
  • Is retraction only for “the worst of the worst”?
  • Stan – The Bayesian Data Scientist’s Best Friend [this one’s from Aki]
  • How to think about a study that’s iffy but that’s not obviously crap
  • The identification trap
  • nisbett
  • The penumbra of shooting victims
  • What is the prior distribution for treatment effects in social psychology?
  • You can’t do Bayesian Inference for LDA! [by Bob]
  • Product vs. Research Code: The Tortoise and the Hare [another one from Bob; he was busy that week!]
  • Party like it’s 2005
  • The challenge of constructive criticism
  • We got mooks [This one I actually posted, and then one of my colleagues asked me to take it down because my message wasn’t 100% positive.]
  • Can’t Stop Won’t Stop Splittin
  • Some statistical lessons from the middle-aged-mortality-trends story
  • Humans Can Discriminate More than 1 Trillion Olfactory Stimuli. Not.
  • If I have not seen far, it’s cos I’m standing on the toes of midgets
  • Ovulation and clothing: More forking paths [this one was in the Zombies category and I think we’ve run enough posts on the topic]
  • How to get help with Stan [from Daniel. I don’t know why he didn’t post it.]
  • Running Stan [also from Daniel]
  • Stan taking over the world
  • Why is Common Core so crappy?
  • Attention-grabbing crap, statistics edition
  • Optimistic or pessimistic priors
  • I hate hate hate hate this graph. Not so much because it’s a terrible graph—which it is—but because it’s [Yup, that’s it. I guess I didn’t even finish the title of this one!]
  • Some more statistics quotes!
  • Show more of the time series
  • Just in case there was any confusion
  • “Steven Levitt from Freakonomics describes why he’s obsessed with golf” [Enough already on this guy. — ed.]
  • A statistical communication problem!
  • What should be in an intro stat course?
  • Postdoc opportunities to work with our research group!!
  • When you call me bayesian, I know I’m not the only one
  • The NIPS Experiment [from Bob]
  • Sociology comments
  • When is a knave also a fool?
  • Income Inequality: A Question of Velocity or Acceleration? [by David K. Park]
  • Economics now = Freudian psychology in the 1950s [I already posted on the topic, so this post must be some sort of old draft.]
  • “College Hilariously Defends Buying $219,000 Table”
  • Having a place to put my thoughts
  • ; vs |
  • The (useful) analogy between preregistration of a replication study and randomization in an experiment
  • It’s somewhat about the benjamins [Hey, I like that title!]
  • Intellectuals’ appreciation of old pop culture
  • Is it hype or is it real?
  • Scientific and scholarly disputes
  • Book by tobacco-shilling journalist given to Veterans Affairs employees
  • Alphabetism
  • I ain’t got no watch and you keep asking me what time it is

That takes us back to Oct 2014. Some of these are close to finished and maybe I'll post them soon; others are on topics we've already done to death; and for some of the others, I have no idea what I was going to say. That last post above, I remember thinking of the idea when I was riding my bike and that Dylan song came on. When I got home, I wrote the title of the post but failed to put anything in the main text box, and now I've completely forgotten my intentions. Too bad, it's a good title.

P.S. I wrote the above a few months ago and I have a couple more drafts now in the pile.


  1. Shecky R says:

    Methinks you have too much free time on your hands, Andrew; may need to request the University assign you more classes to teach… ;-)

  2. You did post “Hey PPNAS… this one is the fish that got away” (unless you have a different version or a Part 2):

    The title stood out in my memory.

    I hope some of these see the light of blog; they look like fun!

  3. Just want to mention that the humorous titles you use are appreciated.

  4. Dave says:

    It would be fun if you let us vote on which post to finish and publish.

    I’d vote for “Deep learning, model checking, AI, the no-homunculus principle, and the unitary nature of consciousness.”

  5. Chris J says:

    If there is anything new in “Statistical significance, the replication crisis, and a principle from Bill James” and we get a vote (realizing that AJG, like the electoral college, has the final say), I vote for that. Also, it’s too bad we have to lose all the good stuff that will never make it to publication for the reasons you explain. Ideally, you would have a page or site that is the “garbage pail” of half-completed thoughts and musings, not for attribution, etc., but ripe for the pickings. I am sure a lot of folks would rather pick through your garbage than, say, someone like Henry Kissinger. (HK was the first person put through that experience, at least based on my recollection.)

  6. Jose says:

    +1’s to neural networks and common core!

    Also, why would Bob say you can’t do Bayesian inference on LDA? (I assume it’s latent dirichlet allocation)?!

    • Andrew says:


      What Bob means is that LDA is fundamentally a discrete or multimodal problem, so any real Bayesian inference for LDA would require the combinatorial explosion of summing over all possibilities. Thus, any Bayesian computations for LDA are only approximate and can be best viewed as explorations around some range of possibilities near a starting point or point estimate.
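      To make the combinatorial explosion concrete (a hypothetical back-of-the-envelope sketch, not from the original post): in LDA, each of the N word tokens in the corpus receives one of K topic assignments, so an exact posterior sum ranges over K^N discrete configurations.

      ```python
      # Back-of-the-envelope illustration of why exact Bayesian inference
      # for LDA is infeasible: the posterior sum ranges over every possible
      # topic assignment of every word token, i.e. K^N configurations
      # for K topics and N tokens.

      def num_topic_assignments(num_topics: int, num_tokens: int) -> int:
          """Count the discrete configurations an exact posterior sum ranges over."""
          return num_topics ** num_tokens

      # A toy corpus is already enumerable only barely:
      print(num_topic_assignments(10, 5))     # 100000
      # A realistic corpus is hopeless to enumerate exactly:
      print(num_topic_assignments(10, 1000))  # an astronomically large integer
      ```

      This is why practical LDA fitting relies on approximations (collapsed Gibbs sampling, variational inference) that explore only a neighborhood of the posterior rather than summing it exactly.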

  7. Hannes says:

    I’d love to see “What should be in an intro stat course?”. I have several students (some actual students, some cross-industry) at the company whom I’d really like to introduce to this fascinating subject.
