
Using Stan in an agent-based model: Simulation suggests that a market could be useful for building public consensus on climate change

Jonathan Gilligan writes:

I’m writing to let you know about a preprint that uses Stan in what I think is a novel manner: Two graduate students and I developed an agent-based simulation of a prediction market for climate, in which traders buy and sell securities that are essentially bets on what the global average temperature will be at some future time. We use Stan as part of the model: at every time step, simulated traders acquire new information and use this information to update their statistical models of climate processes and generate predictions about the future.

J.J. Nay, M. Van der Linden, and J.M. Gilligan, Betting and Belief: Prediction Markets and Attribution of Climate Change, (code here).

ABSTRACT: Despite much scientific evidence, a large fraction of the American public doubts that greenhouse gases are causing global warming. We present a simulation model as a computational test-bed for climate prediction markets. Traders adapt their beliefs about future temperatures based on the profits of other traders in their social network. We simulate two alternative climate futures, in which global temperatures are primarily driven either by carbon dioxide or by solar irradiance. These represent, respectively, the scientific consensus and a hypothesis advanced by prominent skeptics. We conduct sensitivity analyses to determine how a variety of factors describing both the market and the physical climate may affect traders’ beliefs about the cause of global climate change. Market participation causes most traders to converge quickly toward believing the “true” climate model, suggesting that a climate market could be useful for building public consensus.

Our simulated traders treat the global temperature as a linear function of a forcing term (either the logarithm of the atmospheric carbon dioxide concentration or the total solar irradiance) plus an auto-correlated noise process. Each trader has an individual belief about the cause of climate change, and uses the corresponding forcing term. At each time step, the simulated traders use past temperatures to fit the parameters of their time-series models, use these models to extrapolate probability distributions for future temperatures, and use these probability distributions to place bets (buy and sell securities).

Gilligan continues:

We developed our agent-based model in R. At first, we used the well-known nlme package to fit generalized least-squares models of global temperature with ARMA noise, but this was both very slow and unstable: many model runs failed with cryptic and poorly documented error messages from nlme.

Then we tried coding the time series model in Stan. The excellent manual and helpful advice from the Stan users mailing list allowed us to quickly write and debug a time-series model. To our great surprise, the full Bayesian analysis with Stan was much faster than nlme. Moreover, the generated quantities block in a Stan program makes it easy for our agents to generate predicted probability distributions for future temperatures by sampling model parameters from the joint posterior distribution and then simulating a stochastic ARMA noise process.
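A minimal sketch of such a Stan program, assuming a simplified AR(1) noise process rather than the paper's full ARMA(p,q) model (all names and the exact structure here are illustrative, not the authors' actual .stan files), might look like:

```stan
// Hypothetical sketch: global temperature as a linear function of a
// forcing term with AR(1) noise. The paper's model uses ARMA(p,q).
data {
  int<lower=2> N;           // number of historical years
  vector[N] forcing;        // log(CO2) or total solar irradiance
  vector[N] temp;           // observed global mean temperature
  int<lower=1> H;           // forecast horizon (years)
  vector[H] forcing_future; // projected future forcing
}
parameters {
  real alpha;                  // intercept
  real beta;                   // sensitivity to the forcing term
  real<lower=-1,upper=1> phi;  // AR(1) coefficient
  real<lower=0> sigma;         // innovation scale
}
model {
  vector[N] mu = alpha + beta * forcing;
  // AR(1) errors: each residual depends on the previous residual
  for (t in 2:N)
    temp[t] ~ normal(mu[t] + phi * (temp[t-1] - mu[t-1]), sigma);
}
generated quantities {
  // Simulate one future temperature path per posterior draw; across
  // draws these paths form the predictive distribution the simulated
  // traders use to price their bets.
  vector[H] temp_future;
  {
    real prev_resid = temp[N] - (alpha + beta * forcing[N]);
    for (h in 1:H) {
      real resid = phi * prev_resid + normal_rng(0, sigma);
      temp_future[h] = alpha + beta * forcing_future[h] + resid;
      prev_resid = resid;
    }
  }
}
```

The key point is the generated quantities block: because it runs once per posterior draw, parameter uncertainty and the stochastic noise process both propagate into the traders' temperature forecasts for free.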

Fitting the time-series models at each time step is the big bottleneck in our simulation, so the speedup we achieved in moving to Stan helped a lot. This made it much easier to debug and test the model and also to perform a sensitivity analysis that required 5000 simulation runs, each of which called Stan more than 160 times, sampling 4 chains for 800 iterations each. Stan’s design—one slow compilation step that produces a very fast sampler, which can be called over and over—is ideally suited to this project.
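The compile-once, sample-many pattern described above can be sketched in R with rstan (file names and the extracted quantity are hypothetical placeholders, not the paper's code):

```r
# Hypothetical sketch of the compile-once / sample-many pattern.
library(rstan)

# Slow step, done once: translate the Stan program to C++ and compile it.
compiled <- stan_model("trader_climate_model.stan")

run_one_step <- function(trader_data) {
  # Fast step, repeated at every simulation time step: reuse the
  # compiled sampler on each trader's current data.
  fit <- sampling(compiled, data = trader_data,
                  chains = 4, iter = 800, refresh = 0)
  # Posterior predictive draws of future temperatures ("temp_future"
  # is an illustrative name for a generated quantity).
  extract(fit, pars = "temp_future")$temp_future
}
```

Amortizing the one compilation over thousands of sampling calls is what makes 5000 runs of 160+ fits each feasible.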

Yesssss!

Gilligan concludes:

We would like to thank you and the Stan team, not just for writing such a powerful tool, but also for supporting it so well with superb documentation, examples, and the Stan-users email list.

You’re welcome!

26 Comments

  1. Anoneuoid says:

    I was going to try it, but line 17 of main.R tries to read a file that doesn’t exist.

    > source("assortativity_coefficient.R")
    Error in file(filename, "r", encoding = encoding) :
    cannot open the connection
    In addition: Warning message:
    In file(filename, "r", encoding = encoding) :
    cannot open file 'assortativity_coefficient.R': No such file or directory

    https://github.com/JohnNay/predMarket/search?utf8=%E2%9C%93&q=%22assortativity_coefficient.R%22

    • Jonathan Gilligan says:

      Sorry about that. I also discovered and fixed some things that needed to be changed to bring the .stan files up to date with the latest version of Stan. You should be able to run it now.

      • Anoneuoid says:

        Thanks, I’ll see about giving it another go. BTW, what I wanted to try betting on is scenario C from here: http://pubs.giss.nasa.gov/abs/ha02700w.html. I’m not sure how easy that would be.

        • Jonathan Gilligan says:

          That should be straightforward. Look at the scripts prepare_date.R and data_and_reservation_prices.R. You would need to prepare a file with the CO2 concentrations for scenario C (see prepare_climate_data.R for how I did that with CO2 from the representative concentration pathways), then edit in a few places to add Scenario C to the RCP scenarios coded in this model, and make sure the file you created with the Scenario C CO2 data loads when you specify “Scenario C” instead of “RCP 8.5” or such. Before running the ABM, which is time consuming for a full run, you can test just the climate projections for your scenario C using test_climate_model.R.

          Be aware that we do not model physical climate processes here. We just use a phenomenological model that fits historical temperatures to historical CO2 assuming that T is proportional to log(CO2) with an ARMA(p,q) noise process, and then extrapolates future temperatures by applying this model to projected future CO2.

          In other words, this is primarily a model of human behavior, not a model of the physical climate system.

  2. Rahul says:

    Naive question. What does a “test-bed for climate prediction markets” mean? i.e. What exactly are we testing here?

    • Jonathan Gilligan says:

      Our plan, going forward, is to compare different kinds of market structures, under different models of how people change their beliefs, to investigate whether some kinds of prediction markets might be expected to be more effective at getting people to agree on a scientific question.

  3. Rahul says:

    >>>Effective climate policies require acting quickly, so it would be valuable to bring the public to a prompt and accurate consensus on the issue<<<

    Nitpicking, but one consensus-outcome would mean not acting at all, right?

    • Erik says:

      No, because it says “accurate consensus” and is written from the perspective of someone who is convinced by the evidence for climate change, that it is caused by humans and harmful. Though strictly speaking you could be convinced of all that, but still decide that doing something is too much work :)

      • Rahul says:

        Right. I’m convinced that it is humans too. But if I were to evaluate climate markets as a means to achieve consensus, I ought to be willing to consider the possibility that the majority chooses the other consensus.

        Ergo, is it a means to facilitate a consensus conclusion, or the “right” kind of consensus conclusion?

      • John says:

        Well, not only that, you could decide that it is worse to do something. Why is global warming necessarily bad? I don’t think it is bad to everybody in all countries, it might actually be better for some countries.

    • Jonathan Gilligan says:

      That’s a fair point. The sentence you quote is from the introduction, which lays out why anyone should care about using prediction markets in this way. I would hope that if there were broader consensus among the public, then people could agree on what to do.

      This works both ways. If there were broad consensus that climate change is driven by solar variations, then I would guess that there would also be broad agreement that cutting CO2 emissions should not be a priority.

      And you are right that even if there were broad public agreement that CO2 causes most of the observed climate change, that would not automatically mean that the public would agree about what to do about it—many people might still think that it’s too expensive or difficult to cut emissions.

      So I think—just my opinion, of course—that broader public consensus on what’s causing climate change would help advance the policy discussions, but would by no means be sufficient to produce consensus on what to do, much less to produce consensus on some specific course of action.

      • Jack says:

        Prediction markets got the uncertainty wrong with Hillary and Brexit, events that were way, way easier to analyze. Why would they get it right with climate change?

        • Jonathan Gilligan says:

          This is a great question.

          We do not assume that prediction markets get things right all the time. What we do assume is that traders learn from the payouts. If you and I each had a different idea about presidential elections—say I thought that polls were a better predictor and you preferred structural models—and you won a lot more money than I did over a series of elections, I might decide that your model was better than mine.

          The value in prediction markets is that they efficiently share information. If most traders’ information is wrong, then the market will share that incorrect information, which can lead to big errors, such as the ones you point to. This is why we structured the market around 6-year predictions instead of, say, 50-year predictions. The short time frame allows for frequent reality checks.

          What my co-authors and I argue is that if people consistently lose money when their bets mature, they are likely to change their minds about what’s going on, just as many people who bet on “Remain” or “Clinton” are likely questioning the assumptions that led to those bets.

        • Jonathan Gilligan says:

          I would also question your assertion that it’s much easier to predict elections than climate.

          Consider this letter (http://www.nature.com/ngeo/journal/v6/n4/full/ngeo1788.html): Myles Allen and his co-authors made a prediction in 2000 of what the climate would do over the next 40 years, then returned after 10 years to check how the prediction was doing so far, paying particular attention to testing the estimates of uncertainty in the forecast. I don’t know of anyone who thinks they can predict elections ten years in the future with accuracy comparable to Allen’s climate prediction.

  4. Jack says:

    The paper starts with “Despite much scientific evidence”.

    Ok, what is this scientific evidence?

    I’ve been trying to find the main peer-reviewed scientific papers that argue global warming is man-made. The best thing I found so far were a couple of lousy correlational studies that wouldn’t pass a simple smell test.

    Can someone here please enlighten me?

    What are the best scientific papers on this field? Are they only using lousy time series analysis or am I missing something?

    • Jonathan Gilligan says:

      Some of the best work on figuring out what is causing climate change is the line of research that emerged in the mid 1990s called “fingerprint analysis.”

      Different causes of warming produce different patterns in space and time (e.g., do daytime temperatures rise more or less than nighttime temperatures? Does the stratosphere warm or cool when the troposphere warms?). Comparing observed patterns of warming to those predicted theoretically by different hypotheses allows scientists to rule out incorrect hypotheses when the patterns don’t match.

      One of the earliest examples was a paper by Manabe and Wetherald in 1967 (http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469(1967)024%3C0241:TEOTAW%3E2.0.CO;2), which predicted that if solar variations were the cause of climate change, then the stratosphere should warm or cool together with the troposphere, whereas if variations in greenhouse gases were the cause, then the stratosphere should cool when the troposphere warms and vice-versa. This was a clear prediction, and the subsequent half-century of data shows the troposphere warming and the stratosphere cooling, which is evidence for the relative roles of greenhouse gases and solar variation.

      Similar tests have been performed on daytime versus nighttime temperatures and many other patterns.

      Here are links to further reading which reviews the literature on fingerprint analysis and attribution of the causes of climate change:
      https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf (Section 10.2 explains the general approach, and 10.2.3 focuses on optimal fingerprint methods; see also FAQ 10.1: Climate Is Always Changing. How Do We Determine the Causes of Observed Changes? Table 10.1 summarizes 33 different hypotheses the authors consider, what they conclude about each hypothesis, and how uncertain they are. And the references section lists around 700 peer-reviewed papers that they consulted and summarized in this chapter.)

      http://journals.ametsoc.org/doi/abs/10.1175/JCLI3329.1 This is more of a technical review paper of the first decade of optimal fingerprint analysis and attribution studies. If you’re interested in a more technical, mathematical discussion of the methods this will be more useful to you than the IPCC report.

      I hope this is helpful in answering your question.

      Disclaimer: I draw on work that others do in this area, but I do not actually do research on detection and attribution of climate change, so I am not an authority on these methods. Read what I say with that in mind.

  5. Jack says:

    For instance, if you go to NASA’s website, there isn’t a single sound scientific paper listed there (http://climate.nasa.gov/scientific-consensus/)

    The only thing they argue is that there is a consensus. Seriously, if this is the best argument, it is no wonder people are skeptical about this. They should be.

    • Andrew says:

      Jack:

      You went to the page called “scientific consensus” (see your url!) so of course that’s where they discuss scientific consensus. If you go to the page called “evidence” you’ll see a discussion of evidence. I have no idea if the NASA website is the best source for information here, but if you are going to look at the NASA website, you might as well go to the appropriate page there.

      • Jack says:

        The evidence page is no good either. I still would appreciate recommendations of good papers. No irony: I really want to see the good, hard scientific reasoning, not just toy simulations with toy models, or just time series with correlations.

        If anyone has read those papers, please share with us.

        If none of you have read those papers and still believe in that, stop and think for a while: should I ridicule skeptics if I myself never really put critical thought into this?

        If the best we have is what is there on NASA’s website, we have to be honest and say that as scientists we should be skeptical. If we want to do something about it, even with all the ambiguity, that is a political question; let’s not pretend it’s not. People talk about this as if man-made global warming is a proven scientific fact and then appeal to authority when someone disagrees, saying that “97% of scientists agree with that”. I am a scientist, and this kind of groupthink bullying just makes me sad about the profession.

        • Andrew says:

          Jack:

          NASA’s website gives an overview and a few links and does not purport to be exhaustive. Nobody except possibly you is saying that “the best we have is what is there on NASA’s website.” But, in any case, that website does have links to “hard scientific reasoning and not just toy simulations with toy models.” Even on that page there are references to journal articles on sea ice and various other topics. So I’m not really sure what you’re talking about in your first paragraph above. If you want to read more on the topic, I guess you could start with the articles linked at that NASA page and go from there.

          If you want something more bloggy, there’s this from Phil Price.

          The short story is that the picture of climate change comes from a combination of physical models and many different data sources. Reconstruction from any single data source can be difficult, as we discuss here.

        • Jonathan Gilligan says:

          I will point out that the prediction market simulation does not assume that CO2 is causing climate change. It gives equal weight to the hypothesis that variations in solar intensity cause climate change.

          The results suggest that if climate change is driven by solar variation instead of CO2, then the prediction market may be effective at producing consensus that solar variation is the cause. If you’re a skeptic about greenhouse gases and I believe that greenhouse gases do cause climate change, then whichever of us is right, a prediction market may be able to rapidly produce consensus on the correct cause.

          The whole point of my paper here is that if you do not find the peer-reviewed scientific literature persuasive, then opening a prediction market in which you and I can bet real money according to our beliefs may be more fruitful than you and me arguing about whether a certain research paper is persuasive.

          But do note my use of “might” and “may.” This is just a first study, and a purely theoretical one. The relationship between our results and the real world is very uncertain. The most we should conclude from it is that it would be worthwhile to study this aspect of prediction markets in more detail.
