Can you account for this?

I’m speaking (remotely) to a roomful of accountants tomorrow. Exciting, huh? Actually, I don’t know if they’re accountants. They’re “accounting researchers,” whatever that means. . . .

The title they gave to my talk is “A statistician’s thoughts on registered reports.” There’s no abstract (and, of course, no slides) but they sent me this list of relevant issues:

1. Power is essential for any good empirical research. How effective is registration at improving power? The positive, given the way we are doing it, is that people can commit to gathering a lot of high-quality data, knowing that they will be published regardless of whether the results support their hypotheses.

2. How helpful is it really to reduce HARKing [hypothesizing after results are known]?

3. How effective is registration in reducing p-hacking, especially given that so many of the interesting results in these papers are in supplementary analyses?

4. If you don’t like p-values and NHST [null hypothesis significance testing], is registration a step in the right or wrong direction? The way we’ve done it, everyone is spelling out hypotheses that are the focus of the paper. Is there a form of registration that gets away from this?

5. Where might Bayesian stats, thorough description, etc., fit in? Here’s one paper that explicitly incorporates Bayesian analysis . . . one of the very few in the field ever to do so: “Investor Behavior and the Benefits of Dispersed Stock Ownership” by Darren Bernard, Nicole Cade, and Frank Hodge.

6. We are pretty traditional NHST/frequentist. If you talk about alternatives to this, it will be the first time for many in this audience to think about them.

I think of accountants as pretty concerned with getting things right, and not over-claiming. So I can see how null hypothesis significance testing can be attractive to them (all that concern about keeping the error rate low, if the null hypothesis is actually true) but how it can also be a real trap, by pushing people toward that horrible alchemy where any statistical study, however noisy, leads to a deterministic statement about the effect being there or not. So it’s a real concern.
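That trap can be made concrete with a minimal simulation (my own sketch, not part of the talk; the true effect size and standard error are made-up numbers for illustration): when a study is badly underpowered, the estimates that happen to clear the p < 0.05 bar are, on average, large overestimates of the true effect, what Gelman and Carlin call a type M (magnitude) error.

```python
import random
import statistics

random.seed(1)

true_effect = 0.1   # small true effect (assumed, for illustration)
se = 0.5            # standard error of each study's estimate (low power)
n_sims = 100_000

significant = []
for _ in range(n_sims):
    est = random.gauss(true_effect, se)
    # two-sided z-test at the 5% level: significant iff |est| > 1.96 * se
    if abs(est) > 1.96 * se:
        significant.append(est)

power = len(significant) / n_sims
exaggeration = statistics.mean(abs(e) for e in significant) / true_effect
print(f"power: {power:.2f}")
print(f"significant estimates overstate the true effect by {exaggeration:.0f}x")
```

With these numbers, power is barely above the 5% type-1 rate, and the studies that do reach significance report an effect many times larger than the truth; conditioning publication on significance guarantees the literature exaggerates.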

One thing I don’t really know is what accounting researchers do. Are they basically applied economists, writing econ papers in a particular area of application? Do they study the practices of real-world accountants? Do they try to come up with better ways of detecting fraud, etc.? I guess they do all these things, along with others of which I’m not yet aware.

P.S. More here.

13 thoughts on “Can you account for this?”

  1. A quip I recently heard that your last paragraph brings to mind:

    “An economist is someone who likes numbers but doesn’t have a good enough personality to become an accountant.”

  2. Based on looking through the journal, it’s economics research using accounting data. That gives it a particular flavor, with a focus on financial and related outcomes. But the scope includes text analysis of accounting statements and impact of accounting regulation.

    • It also occurs to me that the data source makes it amenable to a class of Bayesian analysis I’ve suggested in the past in other contexts, where analyses are explicitly built and intended to have their results updated regularly as new evidence arrives.

  3. When I was in grad school, accounting PhDs were specialists within the Econ department. There were actually a couple of different thrusts I could see. One was, as you say, doing economicsy stuff with accounting data. Another was more interesting to me: a lot of game theory on how reporting incentives affected the allocation of capital and management effort.

  4. If you want some evidence of why HARKing is misleading in a context JAR accountants will understand, here is a recent NBER working paper.

    “The anomalies literature is infested with widespread p-hacking. We replicate the entire anomalies literature in finance and accounting by compiling a largest-to-date data library that contains 447 anomaly variables.”

    http://www.nber.org/papers/w23394
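A quick back-of-the-envelope check of what that abstract implies (my own sketch; only the 447 figure comes from the paper, everything else here is assumed): even if all 447 anomaly variables were pure noise, screening each one at the conventional 5% level would still flag about 22 spurious “anomalies” per pass.

```python
import random

random.seed(7)

n_variables = 447  # number of anomaly variables, per the paper's abstract
alpha = 0.05       # conventional significance threshold
n_reps = 1_000     # simulated replications of the whole screening exercise

# Under the null, each variable's p-value is uniform on [0, 1],
# so each clears the threshold with probability alpha, independently.
hits_per_rep = [
    sum(random.random() < alpha for _ in range(n_variables))
    for _ in range(n_reps)
]

expected = n_variables * alpha
simulated = sum(hits_per_rep) / n_reps
print(f"expected false 'anomalies' per screening: {expected:.1f}")
print(f"simulated average:                        {simulated:.1f}")
```

So a couple dozen “discoveries” is roughly the baseline one should expect from pure chance before any real anomaly enters the picture.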

  5. As the one who sent the list of questions, let me start by saying: this is why being an accounting professor is a pretty good gig: in addition to the usual barriers to entry in academia (you have to get a PhD), you must also be willing to have people poke fun at you. Many years ago I sat in the front row of a comedy club, close enough for Wayne Cotter to ask me what I did. He got a big kick out of my answer–“I’m getting a PhD in Accounting.” He asked what a dissertation topic would be, stroked his chin, and said, “I’ve got it: ‘Da Vinci’s checkbook: A New Interpretation.’” Not bad!

    We are a pretty eclectic lot, in both topics and method. At this conference alone, we have topics like:

    –How do students in online courses alter effort in response to feedback telling them they are in the top half (vs. top quarter) of the course?
    –How do markets reflect the fact that a firm provides financial disclosures that are easily extracted and processed with text analysis software?
    –How are executive job gaps lengthened by noncompete agreements and specialization?
    –Do student subjects spend more at Starbucks and support lower minimum wages when they are given some Starbucks stock?
    –Do accounting regulations lead or lag accounting scandals?
    –How do retail employees (and sales) change when branches share information about their creative endeavors (like creating in-store promotional posters)?
    –Does the quality of independent auditing decline when auditors can also be hired to provide consulting services?

    Methods and theories are also pretty eclectic. The conference includes:
    –Hand-gathered archival studies based on economics and finance
    –Field experiments based on psychology
    –Laboratory experiments using experimental economics or psychology (the Starbucks one, which lasted for months).

    You can also find surveys, game theory and the occasional simulation in the literature.

    More generally, accounting research studies any setting where people are (or should be) evaluating performance and providing incentives, assessing costs and benefits, limiting corruption or fraud, or making reports more trusted and worthy of that trust.

    Given this definition of the field, I see this experiment with Registered Reports being a natural direction for us–our own research tells us this should make research reports more trusted and worthy of that trust, by changing evaluation and incentives in helpful ways, and reducing the risk of fraud (defined loosely using Andrew’s version of Clarke’s Law–any sufficiently sloppy study is indistinguishable from fraud). Hey, if not us, who? And if not now, when?
