Congressional representation and grant funding

Sergey Aksenov pointed me to this article by Deepak Hegde and David Mowery:

How do congressional appropriations committee members influence the allocation of federal funding for biomedical research? We investigated this question by studying congressional appropriations bills and appropriations committee meeting reports covering the 20 fiscal years between 1984 and 2003. . . . We estimated that the distribution of $1.7 billion of the $37.4 billion awarded by the NIH to extramural performers in the years 2002 and 2003 (appropriated during the congressional year 2001-2002) was influenced by representation on the HAC-LHHE subcommittee . . . We found that an additional HAC-LHHE member increased NIH funding for public universities in the member’s state by 8.8% (P < 0.01) and grants to small businesses by 10.3% (P < 0.01). HAC subcommittee membership had no statistically significant effect (at P < 0.01) on grants to private universities, large firms, or other nonprofit institutions.

This all seems reasonable enough, but I don’t buy the causality story. Couldn’t it very well be that congressmembers from states with more research are more likely to join this committee, in the same way that congressmembers from states with more fishing are more likely to join the Fisheries Committee, and so forth? I’m sure the authors tried to correct for this in some way but I imagine this pattern is so huge as to overwhelm their estimates.
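
To make the selection story concrete, here's a toy simulation; every number in it is invented and has nothing to do with the actual NIH data. If research-heavy states are more likely to land a subcommittee seat, the committee states end up with visibly higher average funding even though the seat has zero causal effect by construction.

```python
# Toy simulation of the selection story; every number here is invented and has
# no connection to the actual NIH data. States with more existing biomedical
# research both attract more funding and are more likely to place a member on
# the subcommittee, so a naive committee/non-committee comparison shows a gap
# even though the "committee effect" is zero by construction.
import numpy as np

rng = np.random.default_rng(0)
n_states = 50

research = rng.lognormal(mean=0.0, sigma=1.0, size=n_states)  # latent research intensity
p_seat = research / (1.0 + research)                          # seat probability rises with research
on_committee = rng.random(n_states) < p_seat                  # selection, not random assignment
funding = 100 * research * rng.lognormal(0.0, 0.2, n_states)  # funding driven by research alone

print("mean funding, committee states:     %.1f" % funding[on_committee].mean())
print("mean funding, non-committee states: %.1f" % funding[~on_committee].mean())
```

Whether the controls in the actual paper soak up this kind of gap is exactly the question.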

4 thoughts on “Congressional representation and grant funding”

  1. In Committee Assignment Politics in the U.S. House of Representatives we demonstrate that the self-selection hypothesis is, to a large degree, not supported by the data.

    In another (unpublished) paper we look at self-selection onto Appropriations subcommittees and likewise show that self-selection is overrated.

    That's not to say that constituency variables should be omitted from the model, but their finding is not that surprising, all things considered.

  2. Economics papers that make it to Science tend to be bad. That's not a statement about this one, just an observation based on past experience.

    They did try to control for various forms of unobserved heterogeneity using fixed-effects specifications in which grants to individual institutions were tracked over time, but I didn't read the paper carefully enough to figure out how satisfactory these were (a rough sketch of what such a specification looks like appears after the comments). Even so, I'd classify this as more of a descriptive paper than one that seriously goes after causality. (The authors didn't attempt to exploit any instrumental variables or natural experiments regarding committee membership or seniority, which is what economists would want to see.)

    Because of the Science article format, it's particularly hard to figure out whether their tests would address these concerns. The article that appears in Science is two pages long with very little detail on methods or background, and then there's an online supplement with all the detail but little general discussion. I want a discussion of the details!

    By the way, I skimmed it and I love that they put every econometric term in quotation marks. "Pooled least squares", "fixed effects", etc. I also enjoyed the fact that they found Senator Al D'Amato to be an outlier, and so ran specifications in which they specifically examined the D'Amato effect.

  3. Sean,

    I agree that the findings are plausible (as I wrote above, "This all seems reasonable enough"); I just didn't find the analysis particularly convincing (pretty much for the reasons Alex gives).

  4. Oh, and I forgot to add my misgivings about using grant or actual-earmark data. The real action is in members' ability to turn earmark REQUESTS into ACTUAL earmarks. Our research on this is currently in progress. So far we've spent most of our time on Military Construction earmarks, but LHHS is a major target. Does SCIENCE take bad POLITICAL SCIENCE papers?! :)
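
For readers wondering what the "fixed effects" specification discussed in comment 2 looks like in practice, here is a rough sketch. Everything in it is hypothetical: the data frame, the column names (grants, hac_member, inst, year), and the clustered standard errors are my illustrative choices, not the specification Hegde and Mowery actually ran.

```python
# Hypothetical two-way fixed-effects regression of NIH grants on subcommittee
# representation. The data frame, column names (grants, hac_member, inst, year),
# and clustered standard errors are illustrative assumptions, not the
# specification reported in the Science paper.
import pandas as pd
import statsmodels.formula.api as smf

def fit_twoway_fe(df: pd.DataFrame):
    """df: one row per institution-year, columns grants, hac_member, inst, year."""
    # C(inst) absorbs anything constant about an institution (e.g., being a large
    # public university); C(year) absorbs common shocks to the overall NIH budget.
    # The hac_member coefficient is then estimated from within-institution changes
    # in representation over time.
    model = smf.ols("grants ~ hac_member + C(inst) + C(year)", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["inst"]})
```

Because the institution effects absorb anything constant about an institution, the hac_member coefficient comes only from changes in representation over the panel; if those changes track trends in a state's research activity, the selection concern in the post still applies.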
