
Stewart Lee vs. Jane Austen; Dick advances

Yesterday’s deciding arguments came from Horselover himself.

As quoted by Dalton:

Any given man sees only a tiny portion of the total truth, and very often, in fact almost . . . perpetually, he deliberately deceives himself about that precious little fragment as well.

And:

We ourselves are information-rich; information enters us, is processed and is then projected outwards once more, now in an altered form. We are not aware that we are doing this, that in fact this is all we are doing.

Wow—Turing-esque (but I can’t picture Dick running around the house).

And, as quoted by X:

“But—let me tell you my cat joke. It’s very short and simple. A hostess is giving a dinner party and she’s got a lovely five-pound T-bone steak sitting on the sideboard in the kitchen waiting to be cooked while she chats with the guests in the living room—has a few drinks and whatnot. But then she excuses herself to go into the kitchen to cook the steak—and it’s gone. And there’s the family cat, in the corner, sedately washing its face.”

“The cat got the steak,” Barney said.

“Did it? The guests are called in; they argue about it. The steak is gone, all five pounds of it; there sits the cat, looking well-fed and cheerful. ‘Weigh the cat,’ someone says. They’ve had a few drinks; it looks like a good idea. So they go into the bathroom and weigh the cat on the scales. It reads exactly five pounds. They all perceive this reading and a guest says, ‘Okay, that’s it. There’s the steak.’ They’re satisfied that they know what happened, now; they’ve got empirical proof. Then a qualm comes to one of them and he says, puzzled, ‘But where’s the cat?’”

Fat wins the thread.

Today’s contest matches up two surprisingly strong unseeded speaker candidates. Jane Austen cuts to the bone, but with discretion; Stewart Lee lets it all hang out. So how do we like our social observations: subtle, or like a refrigerator to the side of the head?

P.S. As always, here’s the background, and here are the rules.

The publication of one of my pet ideas: Simulation-efficient shortest probability intervals

In a paper to appear in Statistics and Computing, Ying Liu, Tian Zheng, and I write:

Bayesian highest posterior density (HPD) intervals can be estimated directly from simulations via empirical shortest intervals. Unfortunately, these can be noisy (that is, have a high Monte Carlo error). We derive an optimal weighting strategy using bootstrap and quadratic programming to obtain a more computationally stable HPD, or in general, shortest probability interval (Spin). We prove the consistency of our method. Simulation studies on a range of theoretical and real-data examples, some with symmetric and some with asymmetric posterior densities, show that intervals constructed using Spin have better coverage (relative to the posterior distribution) and lower Monte Carlo error than empirical shortest intervals. We implement the new method in an R package (SPIn) so it can be routinely used in post-processing of Bayesian simulations.

This is one of my pet ideas but it took a long time to get it working. I have to admit I’m still not thrilled with the particular method we’re using—it works well on a whole bunch of examples, but the algorithm itself is a bit clunky. I have a strong intuition that there’s a much cleaner version that does just as well while preserving the basic idea, which is to get a stable estimate of the shortest interval at any given probability level (for example, 0.95) given a bunch of posterior simulations. Once we have this cleaner algorithm, we’ll stick it into Stan, as there are lots of examples (starting with the hierarchical variance parameter in the famous 8-schools model) where a highest probability density interval (or, equivalently, shortest probability interval) makes a lot more sense than the usual central interval.
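For intuition, here is a minimal sketch of the empirical shortest interval itself—not the paper’s Spin algorithm, just the noisy baseline estimate it is designed to stabilize. Given sorted posterior draws, the shortest 95% interval is the narrowest window containing 95% of the simulations:

```python
import numpy as np

def empirical_shortest_interval(draws, prob=0.95):
    """Narrowest window of sorted posterior draws containing a `prob`
    fraction of the simulations. This is the noisy estimate that the
    Spin method is designed to stabilize."""
    draws = np.sort(np.asarray(draws))
    n = len(draws)
    k = int(np.ceil(prob * n))              # draws inside the interval
    widths = draws[k - 1:] - draws[: n - k + 1]
    i = int(np.argmin(widths))              # left endpoint of shortest window
    return draws[i], draws[i + k - 1]

# Example: a right-skewed posterior, where the shortest interval
# differs noticeably from the usual central interval
rng = np.random.default_rng(0)
sims = rng.lognormal(0.0, 1.0, size=10_000)
lo, hi = empirical_shortest_interval(sims, 0.95)
```

For skewed posteriors like this one, the shortest interval hugs the mode and is substantially narrower than the central interval from the 2.5% and 97.5% quantiles—exactly the situation with the hierarchical variance parameter in the 8-schools example.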

Mohandas Gandhi (1) vs. Philip K. Dick (2); Hobbes advances

All of yesterday’s best comments were in favor of the political philosopher. Adam writes:

With Hobbes, the seminar would be “nasty, brutish, and short.” And it would degenerate into a “war of all against all.” In other words, the perfect academic seminar.

And Jonathan writes:

Chris Rock would definitely be more entertaining. But the chance to see a speaker who knew Galileo, basing his scientific worldview on him, and could actually find weak points in the proofs of the best mathematicians of the day (even if he couldn’t do any competent math himself) should not be squandered. . . .

I love Chris Rock, but you can see him on HBO. Let Hobbes have the last word against Wallis.

Also, Hobbes could talk about the implications of bullet control.

And, now, both of today’s contestants have a lot to talk about, and they’re both interested in the real world that underlies what we merely think is real. Gandhi was a vegetarian, but Dick was a cat person, which from my perspective is even better. Which of these two culture heroes is ready for prime time??

P.S. As always, here’s the background, and here are the rules.

Imagining p<.05 triggers increased publication

We’ve all had that experience of going purposefully from one hypothesis to another, only to get there and forget why we made the journey. Four years ago, researcher Daryl Bem and his colleagues stripped this effect down, showing that the simple act of obtaining a statistically significant comparison induces publication in a top journal. Now statisticians at Columbia University, USA, have taken things further, demonstrating that merely imagining a statistically significant p-value is enough to trigger increased publication. . . .

The new study shows that this event division effect can occur in our imagination and doesn’t require literally seeing a pattern that reflects the general population. . . .

OK, I guess at this point you’ll want to see the original, a news article called “Imagining walking through a doorway triggers increased forgetting,” by Christian Jarrett in the British Psychological Society Research Digest:

We’ve all had that experience of going purposefully from one room to another, only to get there and forget why we made the journey. Four years ago, researcher Gabriel Radvansky and his colleagues stripped this effect down, showing that the simple act of passing through a doorway induces forgetting. Now psychologists at Knox College, USA, have taken things further, demonstrating that merely imagining walking through a doorway is enough to trigger increased forgetfulness. . . .

The new study shows that this event division effect can occur in our imagination and doesn’t require literally seeing a doorway and passing through it. . . .

Yes, I do find this funny. But, at the same time, I recognize that these are not easy questions. And, in particular, Jarrett is in a difficult position in that to some extent his job involves the promotion of psychology research, not just the evaluation.

I sometimes have a similar problem when blogging for the Monkey Cage political science blog. A bit of criticism of political science research is OK, but too much and I get pushback.

So, back to the research in question: Lawrence, Z., & Peterson, D. (2014). Mentally walking through doorways causes forgetting: The location updating effect and imagination. Memory, 1-9. DOI: 10.1080/09658211.2014.980429.

Based on Jarrett’s description, I see a lot of red flags:

1. Lack of face validity. “Mentally walking through doorways causes forgetting”?? According to Jarrett, “The group who’d imagined passing through a doorway performed worse at the task than the first group who didn’t have to go through a doorway.” This could be true—all things are possible—but it sounds a little weird. And the researchers themselves seem to agree with me on this; see next point.

2. Claims that the effect is both expected and surprising. On one hand, “This effect of an imagined spatial boundary on forgetting is consistent with a related line of research that’s shown forgetting increases after temporal or other boundaries are described in narrative text.” On the other hand, the researchers write, “That walking through a doorway elicits forgetting is surprising because it is such a subtle perceptual feature . . . that simply imagining such a walk yields a similar result is even more surprising . . .”

This is what, following up on some observations from Jeremy Freese, we’ve called the scientific surprise two-step.

3. Lots of different small-n studies but no preregistered replications that I see. Lawrence and Peterson’s finding follows up on a paper by a couple of other researchers, four years ago, which, according to Jarrett, “shows that the simple act of walking through a doorway creates a new memory episode.” The recent paper and that earlier paper have a bunch of studies and comparisons, but it seems like a bit of a ramble (or what Freese calls “Columbian Inquiry”). Each time something interesting shows up, the researchers follow up with a new study that is evaluated in its own way with various idiosyncratic data-analysis choices.

Put it all together, and all I can say is: I’m not convinced. I’m not saying I’m sure these claims are wrong; I just think they’re at pretty much the same status as Nosek, Spies, and Motyl’s “50 shades of gray” findings:

Participants from the political left, right and center (N = 1,979) completed a perceptual judgment task in which words were presented in different shades of gray. Participants had to click along a gradient representing grays from near black to near white to select a shade that matched the shade of the word. We calculated accuracy: How close to the actual shade did participants get? The results were stunning. Moderates perceived the shades of gray more accurately than extremists on the left and right (p = .01).

That is, before Nosek et al. tried their own preregistered replication:

We could not justify skipping replication on the grounds of feasibility or resource constraints. . . . We conducted a direct replication while we prepared the manuscript. We ran 1,300 participants, giving us .995 power to detect an effect of the original effect size at alpha = .05.

And got this:

The effect vanished (p = .59).
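Power claims like the one quoted can be sanity-checked in a few lines with the usual normal approximation for a two-sample comparison. The effect size below (d = 0.25) is a hypothetical stand-in, since the quote doesn’t state the original effect size:

```python
from statistics import NormalDist  # standard-library normal distribution

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size d with n_per_group subjects per arm."""
    z = NormalDist()
    se = (2 / n_per_group) ** 0.5          # s.e. of the mean difference on the d scale
    z_crit = z.inv_cdf(1 - alpha / 2)      # critical value for the two-sided test
    return z.cdf(d / se - z_crit) + z.cdf(-d / se - z_crit)

# With 1,300 participants split into two groups of 650, a hypothetical
# standardized effect of d = 0.25 gives power near the quoted .995
p = power_two_sample(0.25, 650)
```

With d = 0, the function returns alpha, as it should: with no true effect, the “power” is just the type I error rate, which is why a huge, well-powered replication can still come up empty.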

P.S. Just to be clear, I’m not trying to pick on Christian Jarrett. It’s not his job to evaluate the strength of claims that have been published in refereed psychology journals. We all just need to be aware that you can’t believe everything you see in the papers.

Chris Rock (3) vs. Thomas Hobbes; Wood advances

In yesterday’s contest, there’s no doubt in my mind that Levi-Strauss would give a better and more interesting talk than Wood, whose lecture would presumably feature non-sequiturs, solecisms, continuity violations, and the like.

But the funniest comment was from Jonathan:

Ed Wood on Forecasting:

“We are all interested in the future for that is where you and I are going to spend the rest of our lives.” Plan 9 From Outer Space: The Original Uncensored And Uncut Screenplay

Ed Wood on Bayesian vs. Frequentist:

“One is always considered *mad* if one discovers something that others cannot grasp!” Bride of the Monster

These quotes are great! I still don’t see Wood getting into the Final Four, but he earned this one, dammit.

And now we have a struggle of two worthy opponents.

Hobbes got past Larry David in round 1, he destroyed Leo Tolstoy in round 2, and now he’s up against another comedian. Does the Leviathan have it in him to advance to the next round, and, from there, likely to the Final Four? It’s up to you to provide the killer arguments, one way or another.

P.S. As always, here’s the background, and here are the rules.

Another disgraced primatologist . . . this time featuring “sympathetic dentists”


Shravan Vasishth points us to this news item from Luke Harding, “History of modern man unravels as German scholar is exposed as fraud”:

Other details of the professor’s life also appeared to crumble under scrutiny. Before he disappeared from the university’s campus last year, Prof Protsch told his students he had examined Hitler’s and Eva Braun’s bones.

He also boasted of having flats in New York, Florida and California, where, he claimed, he hung out with Arnold Schwarzenegger and Steffi Graf. . . . some of the 12,000 skeletons stored in the department’s “bone cellar” were missing their heads, apparently sold to friends of the professor in the US and sympathetic dentists.

To paraphrase a great scholar:

His resignation is a serious loss for Frankfurt University, and given the nature of the attack on him, for science generally.

I’ve heard he’s going to devote himself to work with at-risk youths.

Claude Levi-Strauss (4) vs. Ed Wood (3); Cervantes wins

Yesterday’s was a tough call: we had to decide between two much-loved philosophical writers. As Jonathan put it in comments:

Camus on randomness: how to make a model when there is no signal — only noise.
Cervantes on making the world fit the model through self-delusion.

Two fascinating statistics lectures with the same underlying theme — modelmaking as a chimera: “a horrible or unreal creature of the imagination.”

And, as Zbicyclist writes:

Both are oddly relevant at a time when Ebola threatens and when wind power is making a comeback.

Z almost won it with this comment:

Cervantes would be chivalrous and prompt. Camus would need to take a cigarette break every 5 minutes, that or he’d set off the sprinkler system.

But we’ve already used the cigarette thing, and it’s not so clear that chivalry is a good attribute in a seminar talk.

I’ll go with this quote supplied by Matt:

“The most difficult character in comedy is that of the fool, and he must be no simpleton that plays that part.” —Miguel de Cervantes

And today we have a battle of the dark horses: the versatile anthropologist vs. the moviemaker whom we laugh at, not with. I don’t see either of them making it past Chris Rock or Thomas Hobbes, but we gotta declare a winner.

P.S. As always, here’s the background, and here are the rules.

Define first, prove later

This post by John Cook features a quote from the book “Calculus on Manifolds,” by Michael Spivak, which I think was the textbook for a course I took in college where we learned how to prove Stokes’s theorem, which is something in multivariable calculus involving the divergence and that thing that you get where you turn your hand around and see which way your thumb is pointing, you know, that thing you do to figure out which way the magnetic field goes—the “curl,” maybe??
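For reference (hedging my memory of the notation), the classical curl form of the theorem, and the general form that Spivak’s definitions build toward:

```latex
% Classical (curl) form: F a vector field, S a surface with boundary curve dS
\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
  \;=\; \iint_{S} (\nabla \times \mathbf{F}) \cdot d\mathbf{S}

% Spivak's general form: omega a (k-1)-form on a k-dimensional manifold M
\int_{M} d\omega \;=\; \int_{\partial M} \omega
```

The “utter triviality” is the one-line second statement; the horde of hard definitions is in saying precisely what M, its boundary, a differential form, and the exterior derivative are.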

Here’s the quote from Spivak (as quoted by Cook):

. . . the proof of [Stokes’] theorem is, in the mathematician’s sense, an utter triviality — a straight-forward calculation. On the other hand, even the statement of this triviality cannot be understood without a horde of definitions . . . There are good reasons why the theorems should all be easy and the definitions hard. . . .

Cook places this within a thoughtful discussion of the tradeoff between putting complexity in the definition or in the proof, or, in a computing context, putting complexity in the programming language or in the program itself. To port this to statistics, we might talk about putting complexity in the statistical formalism or in the application. Bayesian statistics, for example, has a complicated formalism but is direct to apply; whereas classical statistical methods are simple—closer to “button-pushing”—but a lot of choice goes into which buttons to push.

Anyway, back to Spivak. I hated the course based on his book, even though the prof was wonderful—he was my favorite math professor in college; I went up to him after the class was over and asked him to be my advisor—and the textbook itself was super-clear. Still, the course made me miserable. We started off the semester with a bunch of completely mysterious definitions, continued with weeks and weeks of lemmas that made no sense (even though I could follow each step), and concluded on the last day with the theorem, at which point I’d completely lost the thread.

It was only a bit later, after I happened to come across Proofs and Refutations, Imre Lakatos’s classic reconstruction of an episode in the history of mathematics, that I realized that the professor, and the textbook, did it backwards.

The right way to teach Stokes’s theorem (at least for me) would be to start by proving the theorem—it indeed is straightforward enough that a so-called heuristic proof could be laid out clearly in a single class period—and then step back and ask: what conditions are necessary and sufficient for the theorem to be correct? Or, to put it another way: under what conditions is the theorem false?

Step 1: The proof. (first week of class)
Step 2: The counterexamples. (second week of class)
Step 3: Going backward from there, establishing the conditions for the theorem, that is, the definition, in whatever rigor is required (the remaining 11 weeks of class).

That’s how they should’ve done it.

Miguel de Cervantes (2) vs. Albert Camus (1); Twain wins

Yesterday’s winner is Mark Twain because, as Anonymous demonstrated in the comments, Twain on Eddy is more interesting than Eddy on Eddy.

Today’s third-round match pits an eternal classic vs. the coolest of the cool.

P.S. As always, here’s the background, and here are the rules.

Adiabatic as I wanna be: Or, how is a chess rating like classical economics?


Chess ratings are all about change. Did your rating go up, did it go down, have you reached 2000, who’s hot, who’s not, and so on. If nobody’s abilities were changing, chess ratings would be boring, they’d be nothing but a noisy measure, and watching your rating change would be as exciting as watching a graph of mildly integrated white noise.

Ratings changes are interesting because the signal is interesting: players are getting better or worse.

But the standard (Elo) theory of ratings is, implicitly, based on the assumption that individual abilities are constant.

So, there you have it: a method whose main purpose is to study change is based on a static model.
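To make the static assumption concrete, here’s a minimal sketch of the standard Elo expected-score formula and rating update (the K-factor of 32 is a conventional choice, not something from this post). The update tracks change even though the underlying model assumes each player’s ability is a fixed constant:

```python
def elo_expected(r_a, r_b):
    """Expected score for player A under the static Elo model:
    a logistic curve in the rating difference, 400 points per decade."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    """A's new rating after one game (score_a = 1 win, 0.5 draw, 0 loss).
    The rating moves by K times the surprise (actual minus expected)."""
    return r_a + k * (score_a - elo_expected(r_a, r_b))

# An evenly matched game: expected score 0.5, so a win moves A up by k/2
r = elo_update(1500, 1500, 1.0)
```

Under the static model these updates are just noise-chasing; Glickman’s time-varying generalizations add an explicit process for abilities drifting between games, so the rating has a real signal to track.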

This problem has been known for a long time; indeed, my grad-school friend Mark Glickman worked on it (PhD thesis: “Paired Comparison Models with Time Varying Parameters”; follow-up papers, “A Comprehensive Guide to Chess Ratings,” “A State-Space Model for National Football League Scores,” and “Dynamic paired comparison models with stochastic variances”). Mark is a chess master, a magician, and a master musician, and he generalized the paired comparison model that underlies chess ratings to allow for time variation in player abilities.

Economic theory has a similar story. Economic transactions represent local disequilibria: Person A sells object X to person B at price Y because A and B have different resources and preferences; once the object is sold, under the usual theory it will not be sold back. In that sense, economic transactions go “downhill,” and the economy would grind to a halt if new “energy” were not added into the system in the form of individuals moving, growing, being born and dying, and so forth.

This point is not new—I’m not claiming any special insight into economics here, nor am I claiming this is some sort of bold criticism of economic theory. It’s well known that classical economics is an equilibrium theory and is thus only approximate, partly because (of course) the world is never in equilibrium, but partly because if the world ever could be in equilibrium, economics would become largely irrelevant. Like the Elo rating system, classical economics is an equilibrium model that is of interest because the world is not in equilibrium.

And, as with the chess ratings, people realize this. Again, I’m claiming no special insight here.

I just wanted to point out this interesting feature of methods that are used to study change but are based on static models. In a sense, it’s impressive how effective a static model can be in such settings, even while it’s clear that we should be able to do better with models that explicitly incorporate nonstationarity.