An economist offers a theoretical model explaining “story time”: that point in social scientists’ papers where they pivot from a well-founded but narrow claim to a broad conclusion that is unsupported by theory or data

Ole Rogeberg writes:

Here's a blogpost regarding a new paper (embellished with video and an essay) where a colleague and I try to come up with an explanation for why the discipline of economics ends up generating weird claims such as those you've blogged on previously regarding rational addiction.

From Ole’s blog:

The puzzle that we try to explain is this frequent disconnect between high-quality, sophisticated work in some dimensions, and almost incompetently argued claims about the real world on the other. . . .

Our explanation can be put in terms of the research process as an “evolutionary” process: Hunches and ideas are turned into models and arguments and papers, and these are “attacked” by colleagues who read drafts, attend seminars, perform anonymous peer-reviews or respond to published articles. Those claims that survive this process are seen as “solid” and “backed by research.” If the “challenges” facing some types of claims are systematically weaker than those facing other types of claims, the consequence would be exactly what we see: Some types of “accepted” claims would be of high standard (e.g., formal, theoretical models and certain types of statistical fitting) while other types of “accepted claims” would be of systematically lower quality (e.g., claims about how the real world actually works or what policies people would actually be better off under).

In our paper, we pursue this line of thought by identifying four types of claims that are commonly made – but that require very different types of evidence (just as the Pythagorean theorem and a claim about the permeability of shale rock would be supported in very different ways). We then apply this to the literature on rational addiction and argue that this literature has extended theory and that, to some extent, it is “as if” the market data were generated by these models. However, we also argue that there is (as good as) no evidence that these models capture the actual mechanism underlying an addiction or that they are credible, valid tools for predicting consumer welfare under addictions. All the same – these claims have been made too – and we argue that such claims are allowed to piggy-back on the former claims provided these have been validly supported. We then discuss a survey mailed to all published rational addiction researchers which provides indicative support for – or at least is consistent with – the claim that the “culture” of economics knows the relevant criteria for evaluating claims of pure theory and statistical fit better than it knows the relevant criteria for evaluating claims of causal or welfare “insight”. . . .

If this explanation holds up after further challenges and research and refinement, it would also provide a way of changing things – simply by demanding that researchers state claims more explicitly and with greater precision, and that we start discussing different claims separately and using the evidence relevant to each specific one. Unsupported claims about the real world should not be something you're allowed to tag on at the end of a work as a treat for competently having done something quite unrelated.

Or, as Kaiser Fung puts it, “story time.” (For a recent example, see the background behind the claim that “a raise won’t make you work harder.”)

This (Ole’s idea) is just great: moving from criticism to a model and pointing the way forward to possible improvement.

7 Comments

  1. Alex Davis says:

    Reminds me of the late Robyn Dawes:

    A message from psychologists to economists: mere predictability doesn't matter like it should (without a good story appended to it)

    http://scholar.google.com/scholar?cluster=1553667

  2. A. Zarkov says:

    Only 11 economists predicted the housing crisis and the great recession. They are:

    Dean Baker, US
    Wynne Godley, US
    Fred Harrison, UK
    Michael Hudson, US
    Eric Janszen, US
    Stephen Keen, Australia
    Jakob Brøchner Madsen & Jens Kjaer Sørensen (grad student), Denmark
    Kurt Richebächer, US
    Nouriel Roubini, US
    Peter Schiff, US
    Robert Shiller, US

    See here for details. The criteria for admission to the list are strict to avoid the "broken clock right twice a day" problem. In particular Steve Keen predicted the problem quite far back. See his commentary on the list here.

    Note Krugman and DeLong do not appear on the list and they should not as they didn't satisfy the criteria. Krugman was nervous about housing prices but he did not connect everything up the way the people appearing on the list did.

    Should we really take the community of economists seriously when, out of some 12,000 professional economists in the world, only 11 presented a cogent theory predicting the most serious banking panic and recession since the Great Depression? In other words, one has to wonder if economists do any "high quality work" at all.

  3. Giles Warrack says:

    Wynne Godley was British. He died in May 2010.

  4. Carina says:

    I am Carina, a master's student in Geography at the University of Waterloo, Canada. Right now I am doing my master's thesis on multilevel analysis of neighbourhood crime rates and social contexts in six Canadian cities. As you have experience in multilevel modelling, could you help me figure out some problems I face in my research?

    The main purpose of my present research is to investigate whether contextual characteristics at the neighbourhood and city levels each make an independent contribution to a neighbourhood's crime rate after controlling for the spatial dependence of crime across neighbourhoods. Due to data availability, I only have crime data aggregated to the census tract level for six Canadian cities.

    Since the data consist of a two-level hierarchy with neighbourhoods nested within cities, a set of two-level hierarchical linear models was developed, with neighbourhoods (census tracts) as the micro level of analysis and cities as the second level. The dependent variable is the violent/property crime rate at the neighbourhood (census tract) level. The independent variables are census variables measured at both the neighbourhood and city levels.

    Right now I have two questions:

    1) I only have data for six cities at present, while I have 1,479 neighbourhoods at the first level (ranging from 50 to 500 neighbourhoods per city). It seems that the sample size at level 2 is too small. I have run the models using the HLM 7.0 software and obtained some results, but I am worried that the results may not be valid with such a small sample size. Do you think six cities are sufficient for multilevel modelling?

    2) Adjacent neighbourhoods are interdependent in space (neighbourhoods are subject to influence from nearby neighbourhoods), which violates HLM's assumption of independently distributed level-1 error terms. Therefore, I also want to take such spatial dependence into account by using spatial regression models. For this purpose, a spatial lag variable (Wy) was created capturing the impact of the dependent variable (crime rate) in neighbouring areas. A spatial lag is the weighted average of the crime rate in neighbouring locations. I included this spatial lag of the dependent variable in the level-1 equation, along with other neighbourhood-level predictors, to assess whether the spatial dependence of crime rates across neighbourhoods has an effect. However, the level-1 equation in this case has an endogenous variable Wy on the right-hand side, and I am not sure whether it can still be estimated by the maximum likelihood (ML) method that HLM 7.0 implements by default.
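    The spatial lag construction described above can be sketched with a toy example (hypothetical adjacency matrix and crime rates, not the actual Waterloo data): row-normalizing a binary adjacency matrix W makes each element of Wy the average crime rate of that neighbourhood's adjacent neighbours.

    ```python
    import numpy as np

    # Hypothetical toy data: 4 neighbourhoods arranged in a chain,
    # so neighbourhood 0 borders 1, 1 borders 0 and 2, etc.
    adjacency = np.array([
        [0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [0, 0, 1, 0],
    ], dtype=float)

    # Row-normalize so each row of W sums to 1; Wy is then the
    # weighted average of the dependent variable over neighbours.
    W = adjacency / adjacency.sum(axis=1, keepdims=True)

    crime_rate = np.array([10.0, 20.0, 30.0, 40.0])  # y
    spatial_lag = W @ crime_rate                     # Wy

    print(spatial_lag)  # [20. 20. 30. 30.]
    ```

    This only constructs the lag variable; as the comment notes, plugging Wy into the level-1 equation as an ordinary predictor leaves an endogenous regressor, which is why spatial-lag models are normally estimated with a dedicated ML or instrumental-variables procedure rather than standard HLM.
    
    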

    Could you help me figure this out? Many thanks in advance.

  5. A. Zarkov says:

    Obviously a typo, as his academic institution is identified. Not my list. Contact Dirk Bezemer http://www.rug.nl/staff/d.j.bezemer/index, and I am sure he will appreciate the correction.

  6. Andrew [not Gelman] says:

    I agree with Zarkov. Also, there are 661,400 M.D.s in the US and 0 of them have found the cure for cancer. Should we really take the community of US M.D.s seriously? What? Curing cancer is harder than predicting the housing collapse? Well, there are 55 times more MDs than economists!

  7. A. Zarkov says:

    I don't expect economists to find a cure for the current Great Recession, let alone all the possible recessions we could experience. I do expect that economists should have been able to recognize the signs and symptoms of an incipient crisis. Case in point: last year my cat's vet identified a small lesion on his lung through a routine X-ray. With a resection of one of his lung lobes, the vets were able to determine that he had a primary neoplasm. Knowing it was primary was all-important for treatment, which so far appears to have been successful.

    I do think that more than 0.1% of economists should have been able to see the signs and symptoms of a looming banking crisis, and the likely consequences. Surely more than 0.1% of doctors can identify a great many cancers even in the very early stages. The vets did it with my cat.

    The failure of 99.9% of economists to see the obvious suggests that our basic understanding of macro economics is somehow flawed. Bezemer suggests that economists need to know more about accounting. They should be able to read a P & L statement. Evidently most of them can't. At least doctors can interpret blood tests and read X-rays.

    Steve Keen says he knows exactly what the neoclassical macro economists are doing wrong – ignoring private debt. He gives us a workable theory and methodology. I don't know for sure if Keen is right, but I do know that something's rotten with the great majority of macro economists.

    BTW I'm impressed that you know the number of MDs to four significant places.