Discussion of the value of a mathematical model for the dissemination of propaganda

A couple people pointed me to this article, “How to Beat Science and Influence People: Policy Makers and Propaganda in Epistemic Networks,” by James Weatherall, Cailin O’Connor, and Justin Bruner, also featured in this news article. Their paper begins:

In their recent book Merchants of Doubt [New York:Bloomsbury 2010], Naomi Oreskes and Erik Conway describe the “tobacco strategy”, which was used by the tobacco industry to influence policy makers regarding the health risks of tobacco products. The strategy involved two parts, consisting of (1) promoting and sharing independent research supporting the industry’s preferred position and (2) funding additional research, but selectively publishing the results. We introduce a model of the Tobacco Strategy, and use it to argue that both prongs of the strategy can be extremely effective—even when policy makers rationally update on all evidence available to them. As we elaborate, this model helps illustrate the conditions under which the Tobacco Strategy is particularly successful. In addition, we show how journalists engaged in ‘fair’ reporting can inadvertently mimic the effects of industry on public belief.

This is an important topic and I like the general principle but I wasn’t so clear what was gained from the mathematical model, beyond the qualitative description and discussion. I asked the authors, and O’Connor replied:

There are a few reasons we think a model is useful here. 1) The models can help us verify that the tobacco strategy might have indeed played the type of role that Oreskes and Conway claim it played. For example, the models show that in principle something as simple as sharing the spurious results of real, independent scientists might be able to prevent the public from figuring out that smoking was dangerous. 2) They can help us neatly identify causal dependencies in cases like this, which is especially useful in figuring out which conditions make it harder or easier for propagandists. For instance, we see from the models that larger scientific communities might be a bad thing in some cases, because the extra researchers are potential sources of spurious results. This is not initially obvious. 3) By dint of making these causal dependencies clear, they help us identify interventions that might help protect the public from industry propaganda.
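To make the setup concrete, here is a minimal sketch of the selective-sharing dynamic the paper describes. It is a toy reconstruction, loosely in the spirit of the network epistemology models the authors build on, not their actual model, and every number in it (success rates, trial sizes, number of scientists) is invented for illustration:

import numpy as np

rng = np.random.default_rng(0)

N_SCI = 20      # scientists testing a new treatment against an old one that pays off 0.5
P_TRUE = 0.6    # true success rate of the new treatment, so it really is better
N_TRIALS = 10   # trials each active scientist runs per round
ROUNDS = 200

sci = np.ones((N_SCI, 2))   # Beta(alpha, beta) credences about the new treatment
pol = np.ones(2)            # the policy maker's Beta credence

for _ in range(ROUNDS):
    for i in range(N_SCI):
        # a scientist tests the new treatment only while she thinks it is at least as good
        if sci[i, 0] / sci[i].sum() >= 0.5:
            k = rng.binomial(N_TRIALS, P_TRUE)
            sci[:, 0] += k              # every scientist sees every result
            sci[:, 1] += N_TRIALS - k
            if k / N_TRIALS <= 0.5:     # the propagandist forwards only spurious-looking results
                pol[0] += k
                pol[1] += N_TRIALS - k

print("mean scientist credence:", round(float(np.mean(sci[:, 0] / sci.sum(axis=1))), 3))
print("policy maker credence:  ", round(float(pol[0] / pol.sum()), 3))

In runs like this the scientists, who see all the evidence, converge toward the true success rate, while the policy maker, who rationally updates but only on the selectively shared spurious results, ends up believing the opposite; that is the qualitative effect O’Connor describes.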

63 thoughts on “Discussion of the value of a mathematical model for the dissemination of propaganda”

  1. > The models can help us verify that the tobacco strategy might have indeed played the type of role that Oreskes and Conway claim it played.

    No, they can’t. Math is flexible enough that for any possible “role,” there is a mathematical model consistent with it. Consider the converse: Is there any possible math model which would prove that Oreskes and Conway were wrong? Or provide any evidence in that direction? Of course not.

    > They can help us neatly identify causal dependencies in cases like this

    No, they can’t. (Or at least they can’t until you start adding some real world data to the problem, at which point you are doing statistics.) You can not determine “causal dependencies” without data. You can only assume them.

    > they help us identify interventions

    No, they don’t. Of course, it is possible that, once we use some data, this particular framework might prove useful in estimating causal effects. But, like all the endless mumbo-jumbo coming from the Santa Fe Institute, there is little evidence that math/simulation — without actual data — helps us to do much of anything.

    • >>>Math is flexible enough<<<

      I feel this is crucial. I see a lot of models with so many knobs that it's hard to know whether they fit reality or could be made to fit one of a large number of realities.

The model may still be useful but let's rid ourselves of the delusion that this type of model "verifies" anything.

The ultimate example of these kinds of things is the global circulation models used in a variety of cases, but especially climate prediction. These are literally millions-of-degrees-of-freedom nonlinear dynamical systems, whose predictions fail to match reality beyond a couple of days, being used to say something about average global weather a century in the future.

The basic 1-degree-of-freedom globally integrated model using just CO2 concentration and its effect on radiation that was proposed in the 1800s provides just about as much information about the process, or more.
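For reference, a back-of-envelope version of that kind of globally integrated CO2-and-radiation model fits in a few lines. The logarithmic forcing fit and the sensitivity value below are standard round numbers assumed for illustration, not outputs of any particular paper or GCM:

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from CO2, using the common logarithmic fit."""
    return 5.35 * math.log(c_ppm / c0_ppm)

LAMBDA = 0.8  # assumed sensitivity, K per (W/m^2), i.e. roughly 3 K per CO2 doubling

for c in (280, 400, 560):
    f = co2_forcing(c)
    print(f"CO2 = {c} ppm: forcing = {f:4.2f} W/m^2, equilibrium warming ~ {LAMBDA * f:4.2f} K")

With those assumed numbers, a doubling of CO2 gives about 3.7 W/m^2 of forcing and roughly 3 K of equilibrium warming, which is the kind of single-number answer the simple model delivers.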

One very important paper related to Daniel and Rahul’s points is Roberts and Pashler, “How persuasive is a good fit?”, Psychological Review, 2000. Everyone should read it and act on it.

        • Good point.

          Climate scientists have massively complex models that are heavily tuned to fit the temperature record, but they largely ignore the simplest and most obvious model of temperature being driven by CO2 and sulfates (dust).

          https://www.youtube.com/watch?v=Sme8WQ4Wb5w

          The CO2 model was first proposed theoretically in the 1800s, and first fit to empirical data in the 1930s. So why isn’t it prominent in climate science today? I suspect it is because it isn’t alarmist enough. Plus, how much funding is there in updating a 75-year-old analysis?

        • Not going to argue about current climate models much, although I agree with the preference for simpler models that foster greater understanding and insight than massively (over)parameterized computational models. But no, the use of current global circulation models has nothing to do with being more alarmist! I suspect that what has happened over time is that skeptics raised doubts about the simpler models, which prompted more complex models to check outcomes. A good case in point is moving from single-layer to multi-layer atmospheric models. Also, climate scientists are interested as much (and now MORE) in the IMPACTS of temperature increase (i.e. on thermohaline circulation, Hadley cells, etc) than on refining climate sensitivity estimates, although that is still obviously important. To try and understand impacts and feedbacks, you need more complicated models.

This is my sense as well. Circulation models can tell you a lot about how complex things are and whether they are sensitively dependent on certain processes or not. So they can be useful, but more for their ability to tell us how little we know than for any accuracy they may give for prediction with a given set of parameter values.

        • >>> the preference for simpler models that foster greater understanding and insight than massively (over)parameterized computational models<<<

          That was my understanding too all these years. But I'm getting confused with all the flak that Occam's razor seems to be getting these days.

          Are we demonstrating a preference for the complicated model over the simple?

        • I like the saying usually attributed to Einstein about how models should be as simple as possible but no simpler.

I hate oversimplification, which is particularly common in the social sciences; think of that model about China and air pollution, for example. There’s a tendency to reach for linear regression to solve everything, and complicated models are just regressions with more terms! Structure is usually absent. If you want to study air pollution and life expectancy, you shouldn’t be regressing life duration against latitude across a 60-year dataset or whatever.

At the same time, if you want to study the effect of CO2 on global temperatures, but your model requires precise quantitative knowledge of the effect of pH, salinity, and temperature on the life cycle of plankton and their relative fitness among species, so that you can predict the evolution of the spatial distribution of plankton species… as one among thousands of things to be specified… you’ve gone off the rails, even if I do acknowledge that yes, qualitatively, I agree that plankton are crucial for converting CO2 into O2 and biomass…

        • In defense of these models, it is common practice to do sensitivity analysis to determine robustness to parameter perturbations, particularly with respect to bifurcation behavior. Additionally, these degrees of freedom are not made-up quantities. These models are physical.

        • Last time I looked there were plenty of made up quantities, most of them related to complicated physical phenomena like cloud formation or primary production of O2 by plankton or distribution of iron by winds… Sure they describe physical phenomena, but it’s not like the models use aerodynamics of sand and dirt grains to directly calculate how iron is lifted off the ground and carried through the air into the ocean, or how it dissolves into the water or how individual plankton live and die and how they sink into the water, are eaten by whales or whatever.

You’d be surprised at the predictive power of physical theories on cloud formation (nucleation theory, which is an area of my own research in a different context). Likewise, CO2 production by plankton or other organisms follows stoichiometric conservation laws. When you look at things at the right scale they become predictable to a useful degree of precision. Your own ignorance of quantitative work in physics and biological physics is not support for your position of denialism.

> You’d be surprised at the predictive power of physical theories on cloud formation (nucleation theory which is an area of my own research in a different context).

I do not understand why someone would post this without providing a reference. E.g., look at how good the day-to-day predictions are “on this website”, or “made in that earlier paper and checked in this later one”, etc.

          Even to people who apply the argument from authority heuristic liberally you are just a random person on the internet and thus have zero authority.

So, what’s going on with the post?

        • @Anoneuoid As I mentioned, this is in the domain of “nucleation theory,” which is a very mature field that has been studied since the 1920s and before. https://en.wikipedia.org/wiki/Classical_nucleation_theory There are good quantitative treatments of it at different scales. In the scales that I work on I am mostly using differential equations in some mean-field treatment of the problem using modifications to what is called “classical theory.” However, there are models at other scales. Notably there is active research in stochastic treatments using density functional theory where they use many computational tools that would be familiar to a Bayesian scientist (in fact the physics world is where these tools originated).

          If you just google nucleation theory you will get a plethora of research papers over the last few decades. Again, this is an entire field of research and just because a phenomenon seems intractable to you as a statistician doesn’t mean that there are not good quantitative theories of it!

> Your own ignorance of quantitative work in physics and biological physics is not support for your position of denialism

          Spoken like someone *completely* ignorant of my background…

Denialism: I’m not sure what I’m supposed to be “denying”. This seems entirely in your own head. The only thing I deny is what Rahul denied: “let’s rid ourselves of the delusion that this type of model “verifies” anything”. Every run of the model predicts something which we can be *absolutely certain* will not happen (in the sense that each model run has probability zero of correctly predicting the future out to 100 years, heck even 100 days). It still can be useful to discuss, but it certainly doesn’t *verify* anything or provide anything like *authoritative* predictions, nor can we reject all but one model, or even all but 4 or 5 possible variations on the model, or anything like that. If you asked 1000 scientists with a background in physics and chemistry and biology to write down models to be run on your magical infinitely fast computer, you would get 2000 substantially different models *at least*, depending in large part on what processes each scientist really cared about: decay of wood products in rainforests, or the effect of coal-burning policies on ground-level health of Chinese citizens, or whatever. But since computationally it’s very hard to write one of these models, for the most part everyone sticks to a few pre-made computer packages and tweaks this and that.

Ignorance of quantitative work in physics and biological physics: I got my PhD in the research group of one of the top people in the world doing stochastic modeling of physical systems, Roger Ghanem at USC, and I took a full-semester graduate course in Physical and Chemical Oceanography in which I read multiple recent papers on biogeochemical cycles… So no, I’m not ignorant. It’s not my main area of research, but I know generally what people are trying to do, and I see it as mostly fooling themselves.

What I am saying is that to model anything like the entire globe you need to manufacture parameters that describe the net effect of many, many processes which are not modeled at any fundamental level in your dynamical circulation system. Sure, they obey conservation laws and stoichiometry, but that doesn’t buy you much; you have to write down simplified dynamics for things like cloud nucleation, mixing of gases at the surface of the ocean, and biological conversion and breakdown of chemicals, the effect of winds scouring dirt and sand out of the Sahara, emissions of CO2, volcanism, forest fires, geopolitical negotiations on pollution emissions, transport of parasitic nematodes on oceangoing vessels and the resulting health of deciduous hardwoods in the Amazon basin, or whatever. When you start a deep dive there are gazillions of things left out of these models, and the fact that people start putting in x, y, z and the kitchen sink, their favorite effects, just emphasizes that fact.

Can these things be tuned to provide a “useful degree of precision”? In terms of short-term dynamics, yes, you can predict something *like* the weather 5 days out. Inherently though, these systems are sensitively dependent on both initial conditions and accumulated solution errors. It is simply *not possible* to predict the weather even 6 weeks out and fundamentally *will never be possible* for something like 6 months. These systems have positive Lyapunov exponents and so they diverge *exponentially fast* from infinitesimally close trajectories. The question of whether multiple runs of these models out to decades or centuries provides in any way useful statistical information about Bayesian distributions over plausible futures is entirely and I mean *entirely* unanswered, and probably unanswerable at its fundamental level.
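To make the positive-Lyapunov-exponent point concrete, here is a toy illustration with the classic Lorenz-63 system (obviously not a GCM, just the standard textbook example of sensitive dependence on initial conditions):

import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step_rk4(s, dt=0.01):
    # one fourth-order Runge-Kutta step
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a perturbation in the ninth decimal place
for t in range(4001):
    if t % 500 == 0:
        print(f"t = {t * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
    a, b = step_rk4(a), step_rk4(b)

The separation grows by many orders of magnitude before saturating at roughly the size of the attractor, which is the loss of long-horizon predictability being described.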

          People who do this stuff *simply don’t want to hear it* because they are fundamentally making a bundle charging the NSF for heavy duty computing time on expensive clusters of computers, and this is what NSF wants to fund because it *looks like science*.

When you understand that a Bayesian distribution is a plausibility assignment under a model, and you have an enormously M-open domain such as climate prediction so that you aren’t even beginning to run multiple runs of multiple a-priori plausible models… sorry, it’s a joke to say that it’s anything but “mathiness” in the sense of “yes you can do the run, but each and every run tells us something we *know ahead of time* will not occur. And, furthermore, given the computational intensity and the limited data to compare to, we can *never* in the next 1000 years come even close to deciding that any given set of parameters provides a good and useful probability distribution over events years in the future (in the sense of being of good use in a Bayesian Decision Theory analysis).”

        • I also don’t deny that GCM type models have utility, by all means they can provide some guidance on modeling consequences for qualitative purposes. They just have the “mathiness” quality that Rahul and others have mentioned here… They can output pretty pictures that are extremely precise containing gigabytes of information. But about what?

@Daniel Lakeland it was not meant as a personal attack against you. No, I don’t know your personal background; however, I find your dismissiveness of an entire field of modeling to be extremely flippant and not supported by evidence. Do you really think that climate modelers have not learned nonlinear dynamics and do not know about chaos? I know many of them – I have sat in on many of their talks in meetings like the APS March Meeting and SIAM and various other topic-focused workshops, for instance on UQ (a topic your advisor seems to work in). I assure you that they know the pitfalls of modeling.

As I’m sure you know, you formulate models to give predictions at the scales you are interested in. On the scales (both spatial and temporal) that are useful for climate modeling, many of the effects due to the phenomena you list are characterizable well enough. Mesoscale modeling is difficult (weather prediction beyond a few days), but coarser-scale modeling over the time horizons of climate modeling is possible to quantify, even if your mesoscale model deviates from truth due to chaotic effects, because various factors average out.

          From what I have seen, the consensus climate models have had very good predictive performance over the last several decades.

        • For example, consider the following challenge. Someone drops a small buoy off the back of a boat somewhere in the middle of the ocean between LA and Hawaii, the buoy sends you its precise GPS position at 1 second intervals for an hour after hitting the water.

Now, please, using any GCM you like, give me a probability distribution over the location and time where it first makes landfall that is usefully lower entropy than “somewhere along the Pacific Rim or on an island in the Pacific some time in the next few years”, and then we’ll follow the buoy and see if it winds up anywhere in the high probability region of your distribution.

          When you can reliably repeat that experiment for 10 or 100 buoys dropped at randomly selected regions in the ocean at various time intervals over the next year or so, and give substantially lower entropy predictive probability distributions compared to “somewhere on the coast of the world” and show that the actual landfall location is in fact in the high probability region of your probability prediction, I’ll be happy to entertain the idea that GCMs are anything other than mathiness.

@Daniel Lakeland and thankfully that particular problem has no bearing on determining something as coarse as climate, because climate modeling is at a different scale than individual buoys. I’m sure someone could give you some sort of prediction, though, for that particular problem. I don’t know enough about ocean circulation to personally do it. As t → ∞ you go to an absorbing state, but you might also get stuck in the Pacific garbage patch for a while depending on where you start.

        • Plenty of people can give a predictive distribution, but can they also demonstrate that after say 2 years of tracking the actual buoy, the data point is anywhere in the high probability region of the predictive distribution that was given, and the predictive distribution is substantially more precise than what you could get by just asking an oceanographer to guess where it will be?

Just tessellate the earth to, say, a scale of 10 km and give me 1000 draws of the form (Julian day, tessellation tile ID), with the location being either the point of landfall, or, if it hasn’t made landfall by the end of the second year, the location on that day. You can have all the weather data you want for the year leading up to the buoy drop (but no weather data after that).

This is just *one* degree of freedom in a ~3.7 billion degree of freedom model (there are something like 3.7 billion possible (day, tile) combos). If you can’t put a useful probability distribution over this one question out to 2 years, why should anyone place any trust that a probability distribution over, say, the average summer high temperature in Bangladesh in 100 years has any meaning?
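(A quick check of that count, using round numbers for the Earth’s surface area:)

EARTH_SURFACE_KM2 = 510e6      # total surface area of the Earth, about 510 million km^2
TILE_AREA_KM2 = 10 * 10        # 10 km x 10 km tiles
DAYS = 2 * 365                 # two years of possible landfall days

tiles = EARTH_SURFACE_KM2 / TILE_AREA_KM2
print(f"tiles ~ {tiles:.1e}, (day, tile) combos ~ {tiles * DAYS:.1e}")
# prints roughly 5.1e+06 tiles and 3.7e+09 combos, consistent with the figure above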

        • Josh, it seems some of your comments were held for a while, I wasn’t seeing them until this morning, so I apologize if we are talking past each other a little.

First, I agree, of course all those guys in GCM modeling, SIAM, and so forth know about nonlinear chaotic dynamics. I do, however, find it rare for anyone to really understand the fundamental concept of Bayesian statistics; this is true *even in statistics* and sometimes *even for practitioners of Bayesian statistics* and *particularly true* of mathematicians with probability theory backgrounds, so it’s not surprising.

          Most people seem to confuse the frequency of occurrence and the probability (weight of credence) concepts. With GCMs the model runs are so computing-time intensive that it’s extremely rare that you get anything like a Bayesian posterior over anything at all. The best you might get is something kinda like a prior-predictive distribution that’s been kinda hand-tuned to be non-stupid. It’s just computationally ridiculous to expect anything else.

          The fact that people know that a thing is problematic doesn’t keep them from doing it for money. If someone wanted to pay me millions to dig holes and fill them back in, I would probably do it, even though it’d be clear to me that I was not doing anything helpful to society, I’d put my hope in using the money later to do something useful. I’d particularly do it if I actually really enjoyed digging.

          But I don’t think GCM practitioners would be doing it if they thought it was wholly useless. The utility however comes from a variety of indirect sources (examples in no particular order):

          1) Political: if you think policy needs to change, and your GCM runs will help affect the political change you want to see, you might do it.
          2) Computational: many tools are built to run complicated simulations (mathematical and computational tools), perhaps the real value comes from the tools rather than the particular simulation runs. Someone who is interested in UQ for example doesn’t really need to care what the physical system is, it just gives them a chance to work on the techniques for UQ.
          3) Monetary: bringing money into a university to set up computing facilities may be good for the university, good for other researchers, etc
4) Enjoyment: lots of people get into academia because they enjoy working on the problems regardless of how practical they are; this is particularly common among people in pure mathematics or theoretical physics.
          5) …. etc

If coarse-scale climate GCMs are actually good for predicting results *as decision tools* I’d be happy to know it. Here is a more “climate”-oriented challenge: take all the weather data you can get for the last week, and using that, set up and run your GCM, and give me a probability distribution over the weather-based crop and building damage in 2020 on a tessellation of the earth into 1000 tiles (storm, wind, flooding, etc). Also, we’ll ask insurance companies to simply predict the same thing based on just regressions on historical data. I’ll hold onto that prediction and we’ll see what happens in 2020 and whether the GCM provided substantially more specific information than just a regression based on the past. Next we can do a regression of the past plus some kind of zero-order model like the 1890s-type model involving CO2 and sulfates, fit that in Stan, and include it. Does a teraflop-year of computing add anything to the predictive power compared to 3 or 4 degrees of freedom of physics and a decade of insurance claim data?

          https://www.skepticalscience.com/climate-models.htm

According to that website, which bills itself as “skeptical of the skeptics”, GCM-informed IPCC predictions put the actual sea level rise firmly into the tail of the predicted distribution in back-testing. Apparently GCM-based predictions are under-estimating sea level rise. As someone who wants to make decisions, consistently biased under-estimates costing teraflop-years of computing don’t seem particularly helpful. Note, I’m not denying climate change; I’m upset that huge quantities of computing couldn’t give a better predictive distribution so we could make good decisions. They seem like extremely specific, precise predictions; if reality isn’t really in the high probability region of the predictions, how should we use those predictions in decision making?

Taking thousands of people’s time and effort away from *good predictions* and towards *big computing* primarily for prestige and money and smart-people’s entertainment type reasons doesn’t really excite me as a good use of society’s limited resources.

I’d like to see less “mathiness” and big computing and more small-degree-of-freedom, physics-based decision models with impressive back-testing that are also impressive in true, transparent future prediction (i.e., where people are willing to use them today to produce Bayesian posterior-predictive probability distributions over important quantities 3 to 5 years in the future, publish those predictions, and wait and show that they come true).

> Literally first page of google search results! This field is a very mature field matching first-principles modeling and observations. I do not work specifically on cloud formation but I work on other nucleation problems, mostly in biology. Here is a review of open problems in nucleation: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4919765/

          Ok, we seem to be making progress. Can you point to a prediction of the theory you consider especially impressive that is mentioned in the review article you linked?

> @Anoneuoid I am not interested in revealing my identity, particularly since I currently work for the federal government. However I am a Ph.D. applied mathematician of the physics-y type and I do first-principles based modeling of phenomena.

          Yes, please do not reveal your identity if you don’t want to. I’m not someone who puts much stock in the argument from authority heuristic, so it really wouldn’t matter to me.

> When you ask me to give one example of predictive power of the theories that is a very weird request to me. It is an entire field where the theory is constrained by experimental observation.

I think it’s strange that no one has ever asked you this before, or that you never went to look up “just how well are these models working anyway? How do they figure that out?”, etc.

          Just because a theory is “constrained by experimental observation” does not mean it is making impressive predictions. I listed some common possibilities here:

          Maybe these “pre-dictions” are all actually “post-dictions”, maybe the predictions are exceptionally vague (eg, “more dust” leads to “more clouds” rather than “less clouds”), maybe there are lots of impressive predictions but the data is too noisy to really check them, etc. I don’t know.

> Does a teraflop-year of computing add anything to the predictive power compared to 3 or 4 degrees of freedom of physics and a decade of insurance claim data?
>
> https://www.skepticalscience.com/climate-models.htm
>
> according to that website which bills itself as “skeptical of the skeptics” GCM informed IPCC predictions put the actual sea level rise firmly into the tail of the predicted distribution in back-testing. Apparently GCM based predictions are under-estimating sea level rise. As someone who wants to make decisions, consistently biased under-estimates costing teraflop-years of computing don’t seem particularly helpful.

          This argument seems to convey a fundamental misunderstanding of model uncertainty and predictive distributions. If I tell you a value follows a normal distribution and you draw a single sample that falls in the tail of that distribution, it does not prove the assumed distribution is right or wrong. The modeling procedure is not “consistently biased” just because the observed sea level rise falls on the edge of the confidence interval. There is likely high autocorrelation in the predicted sea level rise (e.g., lines with different slopes in the expanding cone of uncertainty). Therefore, it is fairly obvious that if the mean model prediction is initially lower than observed, then it will be lower than observed in the future as well. A simplified interpretation of this example is that there is uncertainty in the rate of sea level rise and the observed rate (1 sample) falls in the tail of the predicted distribution of possible rates. If you wish to make decisions under uncertainty, then presumably you could draw different realizations from the model predictive distribution and consider the impacts under each scenario. If you are “someone who wants to make decisions” under uncertainty, then I hope you are not just considering the mean prediction.

        • > This argument seems to convey a fundamental misunderstanding of model uncertainty and predictive distributions

          No, I understand this very well. A Bayesian predictive distribution is better when it places the actual outcomes in the relatively high probability region. Comparing two *models* the one that puts the actual outcome at higher probability is a better model. You can’t “disprove” a model by a single draw from its predictive distribution, but if you have two models one of which predicts the actual outcome as very likely, and one of which predicts the actual outcome as unlikely, you should favor the one that predicts the actual outcome as likely, this is basic Bayes.
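As a toy illustration of that scoring logic (the observed value and both predictive distributions below are invented numbers, not actual sea-level figures):

import math

def normal_logpdf(x, mu, sigma):
    # log density of a Normal(mu, sigma) at x
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

observed = 3.3   # the hypothetical outcome that actually occurred

# two hypothetical predictive distributions for the same quantity
models = {"A (outcome in its tail)": (2.0, 0.5),
          "B (outcome near its bulk)": (3.0, 0.7)}

for name, (mu, sigma) in models.items():
    print(f"model {name}: log predictive density at observed = {normal_logpdf(observed, mu, sigma):6.2f}")

Neither score falsifies anything on its own; the point is only that, comparing the two, the model that placed the observed value in its high probability region picks up more relative support.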

> …consistently biased under-estimates costing teraflop-years of computing don’t seem particularly helpful

          You weren’t making a statement about comparing two models. You were claiming that a particular model was “consistently biased” without any evidence.

          I am honestly not sure what point you are trying to make. You link to a website that debunks the myth that climate models “are full of fudge factors”. Then, you use that to support your claims that climate models have “plenty of made up quantities” and that things are “hand-tuned to be non-stupid”. I assume you simply enjoy trolling the comments section.

          If you have a better climate model that doesn’t require significant computing power, then please share that with the world. Based on your confusion between climate and weather, your need to reference your PhD adviser to lend credibility to your statements, and your misunderstanding of model bias, I am more skeptical of your outlandish claims than I am of complicated computational climate models.

        • > You weren’t making a statement about comparing two models

          Of course I was. The whole point of my comments was to compare this GCM type of model to other possible ways of dealing with climate prediction, including simplified small degree of freedom models combined with historical data for example. The assertion was (by Rahul):

          “I see a lot of models with so many knobs that it’s hard to know whether they fit reality or could be made to fit one of a large number of realities.

The model may still be useful but let’s rid ourselves of the delusion that this type of model “verifies” anything.”

          And I pointed out that I felt GCMs were pretty much in this boat as they obviously are flexible enough to potentially model Mars, Venus, Jupiter, Neptune, extrasolar planets, and by the way Earth or any number of close to earth but not earth planets. They have literally millions of degrees of freedom including initial and boundary conditions and modeling choices made regarding all sorts of processes such as cloud formation or gas mixing or biological processes or whatever.

I think they qualify as having “so many knobs that it’s hard to know…” how well they really work, and they are so compute-intensive that there’s no real way to get a proper Bayesian posterior distribution. We wind up with something that does a barely acceptable job of tracking reality, even according to the people I quoted who support the models, as witnessed by graphs showing that key predictions are at the very edge of the range of values expected from the output of these models.

As for trolling, etc.:

You’ll find that I’ve been here at the blog participating in the discussion continuously, on an almost daily basis, for about 13 years, so you’ll have to excuse me if I get frustrated with people showing up with clearly *zero* knowledge of my background and calling me a troll, a “denier”, or “ignorant of quantitative work in physics…” Having to call out my background because of personal attacks on my character is not something I enjoy doing here at Andrew’s blog. I’m used to it being less personally confrontational, but I also realize that I’ve touched on a bunch of bugaboos here, including the highly politicized climate science scene. I will just stop now because I don’t think going on more really benefits anyone. I’d just like to also point out that above I have already said that GCMs have their uses and are a legitimate area of study, and I agreed with Chris Wilson on the generally political nature of how they came to be the focus of much of climate science:

          http://statmodeling.stat.columbia.edu/2018/08/11/discussion-value-mathematical-model-dissemination-propaganda/#comment-846842

          http://statmodeling.stat.columbia.edu/2018/08/11/discussion-value-mathematical-model-dissemination-propaganda/#comment-845792

          They simply are also one of those areas that have the “flavor” of what Rahul originally called out.

          fin

        • Daniel:

          If you go to the Skeptical Science website you linked to, you will find that you are using at least 3 very common arguments of climate change deniers:

          6) Models are unreliable – This is your so many knobs theory
          32) Climate scientists are in it for the money – This is where you question the motivations for building climate models including financial incentives
          62) Scientists can’t even predict the weather – This is where you compare the complexity of predicting the weather to predicting the climate

Maybe you agree with these arguments. Maybe your ideas are subtly different because they have something to do with Bayesian posterior distributions. The subtlety was lost on me. It is not unreasonable for somebody reading this blog to assume you are a troll and/or climate change denier when you want to rehash these old arguments.

        • Daniel, I think it’s worth pointing out why so many people think you’re a troll.

          Between you and Anoneuoid, it’s nearly impossible to have a well informed discussion on this blog. I try hard to be firm about the things I know about and open to learn more about the things I don’t. Countless times already, I’ve gotten into discussions with the two of you and it’s clear you don’t understand the topic, but are completely unwilling to give ground due to your strong hunches. I’ll ask for your evidence and all I get is “well, it seems to me that…” and yet no attempt to engage further in the other possibilities.

It’s really sad because I think Gelman brings up very important topics that the academic community *needs* to start discussing. In particular, there are plenty of flaws in how things are done and I agree with what I believe is Gelman’s main point: we need to start being more critical of research in general. Sadly, the comments section is so filled with hard-headed, uninformed opinions and people shouting others down that I don’t see this blog as a place to have a reasonable discussion.

          But that’s the downsides to blogs I guess.

        • Well you’ll have to accept my apology as well then if I’ve been too subtle or had my own take that seems obvious to me but isn’t well articulated, because I’m emphatically NOT a climate change denier.

          In fact my concern is that climate change may be happening much more quickly and with much more impact than is generally acknowledged, and that the emphasis on GCMs and the political and financial reasons why they became a major area of research (see chris wilson’s comments I linked previously) has led to poor information for decision making.

          Here at the blog we’ve seen a decade or so of clear cases where *scientists are in it for the money*. This is a common problem throughout science, not just climate, and it isn’t even a character flaw in individuals or anything like that, it’s pure survivorship bias. *Universities* are in it for the money, that is undeniable, and universities simply don’t bring people in, or keep people around (on average) unless they bring in reasonably big bucks compared to others in their field. This is far from being just a problem for climate change.

So, if you sit down and single-handedly work for a year or 5 carefully collecting a data set, creating a 3-to-10-dimensional globally integrated differential equation model, and a means to fit that model to a wide variety of data sources through a Bayesian computation in Stan, you can do this for something like $50k, and you can never get a grant to do it for $25M. Whereas if you write a grant to fund a large supercomputer for your university and spend 25% of the first 5 years running many repeated runs of GCMs, you *can* get a grant to bring in $25M. Guess which researcher university administrators choose to hire straight out of their postdocs?

          Those arguments about GCMs and being in it for the money and soforth are not wrong. But the conclusion “So therefore climate change isn’t occurring” *simply doesn’t follow from them*. What maybe does follow though is that *so therefore we don’t have good decision making tools* and that’s something I think *should* be remedied by moving smart people’s effort away from GCMs and towards simplified models that produce better decision making tools.

          To me, GCMs are primarily of interest in so far as they allow you to ask *qualitative* questions about various phenomena, and thereby constrain your modeling activities during the process of making more simplified tractable and decision-informing models.

Also, on “scientists can’t even predict the weather”: predicting the weather is inherently hard, so the word “even” is unfairly dismissive. I totally agree with that. But the fact remains that we can’t predict the weather.

          GCMs are coarse grained solutions to the same kinds of equations needed for weather prediction, a major component being Navier-Stokes equations, radiative heat transmission equations, and reacting flow equations (to say nothing of biological processes).

          Coarse-graining a solution does not in any way guarantee that it remains a valid description of anything physical. The equations such as Navier Stokes or radiative heating or whatever are what’s called *intermediate asymptotics*. They are valid in the sense that they give answers which are inherently not sensitive to the scale of observation *for a certain somewhat broad range of scales*. (See GI Barenblatt’s entire career for discussion of intermediate asymptotics)

The scales used must be considerably larger than a molecule, and considerably smaller than the smallest “macro-feature” such as a vortex etc. For fluids, this means something between like 50x the mean free path of a molecule in the atmosphere, and maybe 10000x the mean free path of a molecule in the atmosphere. This mean free path is something like 68nm according to Wikipedia, so the Navier-Stokes equations should give reasonably valid solutions for “grid points” between say 3400nm (3.4 microns) and 700000nm = 0.7mm apart. You can get somewhat better results using say spectral methods because of their exponential convergence properties, but these are computational tricks to get the required spatial resolving power. But the main thing is that that range of scales is consistent with what is observed in aerodynamic simulations. The roughness of a golf ball surface matters for the flight of the golf ball in a substantial way, and if you adjust the dimple shape at sub-millimeter scales you get different results that vary in the second decimal place (how far it flies or how fast it curves when spinning etc), and if you want to represent the dimple shape accurately you’ll need a model of a golf ball that does in fact have details at sub-millimeter resolution. The Navier-Stokes equations for air are *inherently* sensitive to features at 1mm scale but insensitive to features between say 3 microns and 100 microns. That’s a pretty amazing range of scales, but it’s nothing like 10km.
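A quick check of those scale numbers, using the same rough 50x and 10,000x factors:

MEAN_FREE_PATH_NM = 68              # mean free path of air at sea level, roughly 68 nm
low = 50 * MEAN_FREE_PATH_NM        # ~3,400 nm = 3.4 microns
high = 10_000 * MEAN_FREE_PATH_NM   # ~680,000 nm, about 0.7 mm

print(f"continuum-ish grid scales: {low / 1e3:.1f} microns to {high / 1e6:.2f} mm")
print(f"a 10 km grid is about {10e3 / (high * 1e-9):.0e} times coarser than the upper end")

That ratio is where the “factor of 10 million more coarse” in the next paragraph comes from.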

          As soon as you want to coarse grain this to a couple of tens of kilometers resolution (a factor of 10 million more coarse?), in order to make it correspond in any way to the reality of the earth, you will have to do a lot to your model which deals with the issue of renormalization group transformations (among other things). The viscosity of the 10km resolved atmosphere required to get it work well is *nothing like* the viscosity of air. Of course very smart people in computational fluid mechanics are more or less aware of this and have concepts like “eddy viscosity”. But beyond just that single example you’ll have huge problems making the mapping between what you’re simulating at this scale and what actually occurs *in the world* be in any way meaningful and convincing. This is just an inherent problem with using Navier-Stokes type equations at 10km coarse scales. The only way you can make it work is to back-test and do a lot of “fitting”, and to make your argument logically convincing *to me* you’ll need a Bayesian posterior distribution over the various fitting parameters which tells me *how much you know* about which simulations *do* and which *don’t* match reality. But that Bayesian posterior is *so far from what you can actually accomplish* computationally, that we simply can’t get there from here, not even close. At best we can pick a few values for the various fitting parameters and run a few runs. Maybe a few hundred or a thousand. But see how much effort Stan has to go to in order to fit much simpler models with only say 100 parameters.

          It does Hamiltonian dynamics *on the simulation*. Meaning a few thousand evaluations of the whole simulation *for every sample it gives* and a few hundred to thousand samples before they get into the typical set, and then a few thousand evaluations within the typical set to give a meaningful posterior. In other words, you’d expect to have to do something like *many billions of runs of a full 100 year GCM model simulation instrumented to give derivatives with respect to all the tuning parameters including all the initial conditions* in order to accomplish the task. If your GCM for 100 years takes even a day to run, you can’t supply me with a convincing Bayesian posterior to tell me how much you know about which simulations to trust any time before 20 million years from now.

          And, in fact, the graphs I previously linked look exactly like what I’d predict: based on a sparse number of a few hundred or a thousand GCM runs, some range of possibilities is predicted, and then the actuals turn out to be just barely within that range. It’s not a good set of information for decision making, it represents only a very little more than the prior distribution assigned by climate scientists. Which is not useless, but isn’t anything like what it’s made out to be: a quantitative decision tool.

          As I’ve said above, the GCM is a great tool for qualitative exploration of dynamical systems. For example, you tune a GCM to give something *like* what the earth does over the last 10 years without worrying about precise quantitative accuracy, you then change the conditions of the run to eliminate 100% of all human CO2 output, and see what *kinds of things* the system does differently. Or you add in an alternative model of cloud formation, or you start burning forests more rapidly, or whatever. It’s not quantitative, but it may tell you something qualitative, like for example does the ocean circulation look different in the two scenarios? Does the sea level rise change appreciably? What physical effects seem to induce those changes most strongly? etc.

Those are useful things to do, but as Rahul said, “let’s rid ourselves of the delusion that this type of model ‘verifies’ anything”.

I hope I’ve explicated my point better. I care a lot about climate change, and also about people around the world, and so I’d like us to create informative models of climate change that can help us balance the economic outcomes of people now with the environmental outcomes of people in 10, 50, 100, or even 1000 years. To get there I believe we need to use the qualitative knowledge we can get from GCMs to build quantitative, data-informed models that give meaningful posterior distributions for decision making, and I don’t think we’ll get there by pretending that GCMs are quantitative models capable of informing decision making in a good way.

        • Nat said to Daniel,
          “It is not unreasonable for somebody reading this blog to assume you are a troll and/or climate change denier when you want to rehash these old arguments.”

          I don’t think anyone who reads this blog often would come to the conclusion that Daniel is a troll or a climate change denier. (However, he has yet to learn that it might be a good idea to include a TLDR summary of his longer posts.)

> Nat said to Daniel,
> “It is not unreasonable for somebody reading this blog to assume you are a troll and/or climate change denier when you want to rehash these old arguments.”
>
> I don’t think anyone who reads this blog often would come to the conclusion that Daniel is a troll or a climate change denier. (However, he has yet to learn that it might be a good idea to include a TLDR summary of his longer posts.)

          Watch out! If you don’t brush your teeth the climate change denier hiding under your bed will stuff you full of coal.

          Seriously though, what does “climate change denier” mean? A person who believes the climate does not change? That humans have zero effect on long term weather trends? That humans have a negligible influence? That the degree of influence and eventual outcomes remain very uncertain?

          Are there guidelines somewhere for when the term “denier” applies? How much freedom for independent thought are people allowed before triggering that label?

          I think from Daniel’s posts he is actually a “denier”, in the sense that he doesn’t just accept whatever he hears from “experts” about climate issues and has his own thoughts on the matter.

> @Anoneuoid As I mentioned, this is in the domain of “nucleation theory,” which is a very mature field that has been studied since the 1920s and before. https://en.wikipedia.org/wiki/Classical_nucleation_theory There are good quantitative treatments of it at different scales. In the scales that I work on I am mostly using differential equations in some mean-field treatment of the problem using modifications to what is called “classical theory.” However, there are models at other scales. Notably there is active research in stochastic treatments using density functional theory where they use many computational tools that would be familiar to a Bayesian scientist (in fact the physics world is where these tools originated).
>
> If you just google nucleation theory you will get a plethora of research papers over the last few decades. Again, this is an entire field of research and just because a phenomenon seems intractable to you as a statistician doesn’t mean that there are not good quantitative theories of it!

I am not a statistician. I have pretty much zero formal training in stats, but also none in nucleation theory. I love studying all positive examples of science though, so if you do have an example of this predictive power in action I would like to see it.

          I’m only skeptical of your claim because if I mentioned “the predictive power of physical theories on X”, then I would have at least one example of an impressive prediction ready to go and wouldn’t be linking to high level stuff like wikipedia pages.

          So, I suspect there is something wrong with your claim. Maybe these “pre-dictions” are all actually “post-dictions”, maybe the predictions are exceptionally vague (eg, “more dust” leads to “more clouds” rather than “less clouds”), maybe there are lots of impressive predictions but the data is too noisy to really check them, etc. I don’t know.

        • @Anoneuoid I linked to wikipedia because I (reasonably) did not assume that you are trained in statistical physics. Again, just go ahead and google the terms nucleation+theory and nucleation+theory+clouds. Have fun. There are some nice mathematical problems there which was the source of my original interest in the topic.

> @Anoneuoid I linked to wikipedia because I (reasonably) did not assume that you are trained in statistical physics. Again, just go ahead and google the terms nucleation+theory and nucleation+theory+clouds. Have fun. There are some nice mathematical problems there which was the source of my original interest in the topic.

Sorry, but it shouldn’t be at all hard for you to give an example of an impressive prediction based on what you claimed originally. I’m not going to go dig through millions of pages of stuff to find one to support your claim.

          This doesn’t mean there are not impressive predictions made by that field but I doubt you are actually familiar with any of them if they do exist. Please refrain from calling others “deniers” and “ignorant” in the future if you won’t/can’t back it up.

          I hope you understand this is not some personal attack, I simply can’t understand why you would not have provided a reference to one otherwise.

        • Literally first page of google search results! This field is a very mature field matching first-principles modeling and observations. I do not work specifically on cloud formation but I work on other nucleation problems, mostly in biology. Here is a review of open problems in nucleation: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4919765/

          When you are at the scale of aggregate cloud formation in the atmosphere, the theories are very good. The difficulties are always in the small scale (time,space) when you cannot rely on some common simplifying assumptions such as quasi-equilibrium, large volume, mean-field, etc.

          As relates to climate modeling, the physical sub-problems that go into those models are all things that we can get a good handle on through experiments AND first-principles modeling. The space and time scales of climate modeling relative to our instruments and the underlying processes makes the math much simpler. If you want to critique the components of climate models you can, one by one. Each of the components has underlying physics that is self-consistent and consistent with experimental and observational evidence.

        • @Anoneuoid I am not interested in revealing my identity, particularly since I currently work for the federal government. However I am a Ph.D. applied mathematician of the physics-y type and I do first-principles based modeling of phenomena. When you ask me to give one example of predictive power of the theories that is a very weird request to me. It is an entire field where the theory is constrained by experimental observation. Just go on google and choose one. Choose one of a topic of interest to you. Choose one on clouds. Whatever.

          My personal experience is in systems where classical nucleation theory fails because of violations of the kinetic picture that underlies the classical theory. I am a mathematician of the analyst type so I like the math that I can get out of these problems. My theoretical work is constrained by observational evidence same as any physical theory is.

  2. D Kane: This slide rendition might help (might do a post on this subject someday).

    Scientific context in which Modelling and Simulation arise.

    I Domain knowledge: Insight into what matters, how and why.
    II Modelling insight: Capturing the essentials of what matters in the representation (model).
    III Simulation engineering: Accurately and efficiently running and learning about the representation.

              Domain knowledge
             /                \
    Modelling ability —— Simulation engineering

So we have a reality beyond direct access that we want to know about, an abstract but fully specified/known representation (model) of that reality, and a means, as accurate as time allows, of learning about the representation.

    All three need to inform and check each other to lessen misunderstanding (being frustrated by reality when we act).

Now, a great question is posed at the end of the news article to which Andrew provides a link.

      ‘But given how powerful selective sharing seems to be, the question now is, who is likely to make effective use of Weatherall and co’s conclusions first: propagandists or scientists/policy makers?’

It seems to me that one would have to construct very detailed logs and decision trees to begin answering this question. Sometimes published accounts do not accomplish this in ways that stick. So one absorbs selectively too.

      Moreover, I don’t know what is precisely meant by ‘propagandists’ b/c, increasingly, expertise has been funded by special interests. So I suggest that perhaps there can be a precising definition that better illustrates the contrast intended.

      • Good point about what is meant by “propagandist”.

        Weatherall wants us to read “propagandist” as “industry spokesperson”.

        But, as Andrew’s blog makes excruciatingly clear, “propagandist” includes many academics, journalists, and activists as well. For industry spokespersons, journalists, and activists, the proportion of propagandists appears to be close to 100%.

        • Thank you. I gathered that was the implication. But then why don’t we use the term ‘industry spokesperson’. There would then be less utility in characterizing it as a contrast between propagandists and expertise. What I mean is that quite often there are insertions of loaded labels that skew discussions of scientific research efforts. Academics would be insulted if we labelled them as ‘industry spokespersons’ too.

        • I remember being little and throwing a pack of cigarettes in the trash because they were “bad”. Does anyone really think that was due to something other than propaganda/brainwashing? That a 7 year old was convinced by the epidemiological evidence?

        • Anon:

          I think it’s a bit extreme to label anti-smoking messages as “propaganda” or “brainwashing.” Sure, the messages that are sent to kids will be simplified, but in this case the message is, ultimately, evidence-based.

I don’t think the ultimate reason for the message comes into play when defining something as propaganda. It’s more about the distribution of the message and how it’s conveyed. I doubt any direct state-level influence on children’s thought processes could be effective besides propaganda. I’m sure many people would argue that children should not be receiving any sort of message directly from the state to begin with.

          Originally this word derived from a new administrative body of the Catholic church (congregation) created in 1622, called the Congregatio de Propaganda Fide (Congregation for Propagating the Faith), or informally simply Propaganda.[2][6] Its activity was aimed at “propagating” the Catholic faith in non-Catholic countries.[2]

          https://en.wikipedia.org/wiki/Propaganda

          Based on the premise that the bible is the word of god, the church was attempting to spread the word of god and get more people into heaven. What could be negative about that?

          Seems the same as:
          Based on the premise that smoking cigarettes has a net negative effect on people’s health, the public health authority was attempting to spread the warning of negative effects and get more people to live longer, healthier lives. What could be negative about that?

        • Cigarettes were initially billed as an enhancement to one’s image. I am sure there was very little condemnation from some physicians, b/c they may also have been smokers.

          It’s all in how we categorize labels. My point is that even a blunt description like ‘propagandist’, cast in a dichotomous fashion, can mislead us into dwelling on second-order questions and answers.

  3. Isn’t this a quaint example of “mathiness”?

    The crux of the underlying explanation was already amply illustrated by the qualitative descriptions. Yet casting it into a mathematical formulation somehow gets it more attention and credibility?

    • Good point.

      I’ve seen a ton of papers with mathematical models tacked on where I wondered what the purpose of the model was.

      The best explanation I could come up with (which is none too compelling) is that a mathematical model forces the author to be clearer about the argument. What variables are important? How are they important? How do they interact and produce the hypothesized result? What variables are left out? What results would contradict the model?

      This is related to Andrew’s posts about the story of the Italian troops lost in a snowstorm in the Alps and how they were saved by acting decisively on incorrect information. A formal model also allows pithier discussion of the arguments and lets critics succinctly propose counter-arguments (a toy sketch of the selective-sharing setup follows below).
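      To make the “forces you to be clear” point concrete, here is a minimal toy sketch of the selective-sharing idea (the setup, numbers, and Python are my own illustration, not the authors’ model or code): honest studies of a new action that really is slightly better than a known baseline are run, and a reader who is shown only the studies that happened to look unfavorable can stay wrongly pessimistic no matter how much evidence accumulates.

      # Toy sketch (my own illustration, not the authors' model or code) of how
      # selectively sharing real, unbiased studies can still mislead a Bayesian reader.
      import random

      random.seed(1)

      TRUE_RATE = 0.55        # the new action really is better than the 0.5 baseline
      BASELINE = 0.5
      TRIALS_PER_STUDY = 10
      N_STUDIES = 2000

      # Beta(1, 1) priors on the new action's success rate: [successes, failures].
      sees_all = [1, 1]       # reader who updates on every study
      sees_selected = [1, 1]  # reader shown only studies that looked unfavorable

      for _ in range(N_STUDIES):
          successes = sum(random.random() < TRUE_RATE for _ in range(TRIALS_PER_STUDY))
          failures = TRIALS_PER_STUDY - successes
          sees_all[0] += successes
          sees_all[1] += failures
          # The "propagandist" forwards only the honest studies that happened
          # to come out at or below the baseline rate.
          if successes / TRIALS_PER_STUDY <= BASELINE:
              sees_selected[0] += successes
              sees_selected[1] += failures

      def posterior_mean(ab):
          return ab[0] / (ab[0] + ab[1])

      print(f"updates on all studies:      {posterior_mean(sees_all):.3f}")
      print(f"updates on selected studies: {posterior_mean(sees_selected):.3f}")
      # Typically the first belief lands near 0.55 and the second well below 0.5:
      # every shared study is real, yet the selective reader ends up misled.

      Writing even this much down forces choices (how many trials per study, what counts as “unfavorable”, how the reader updates), which is exactly the kind of clarity described above, and each choice is something a critic can attack.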

      • This made sense in traditional (say) physics models. Introducing math gave the hypothesis a distinct “attack surface”.

        One makes predictions, which in turn make the model falsifiable.

        That’s exactly the problem with these modern models with too many knobs: An excess of ad hoc parameters makes them difficult, almost impossible, to falsify. Essentially you have an infinitely variable goalpost. No matter what empirical evidence you throw at it, it just won’t stick.

        • I don’t think it’s anything new. That was the strategy used to keep epicycles going.

          It was also Lakatos’ concept of a “degenerating research programme”, wherein “the data leads the theory”: ever more ad hoc adjustments are made, and nothing otherwise surprising is ever predicted correctly.

        • That is one reason why I like Occam’s razor. A simpler model is typically easier to falsify.

          A complicated model with a dozen variable parameters can usually wriggle away from any challenge.
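          A quick toy illustration of that “wriggle away” point (my own example, nothing from the paper): give a model enough free parameters and it will fit pure noise essentially perfectly, so no single data set ever appears to contradict it.

          # Toy illustration (mine): a sufficiently flexible model "explains" pure
          # noise, so a perfect fit carries no evidential weight.
          import numpy as np

          rng = np.random.default_rng(0)
          x = np.linspace(0.0, 1.0, 8)
          y = rng.normal(size=8)   # pure noise: there is nothing real to explain

          for degree in (1, 7):
              coeffs = np.polyfit(x, y, degree)
              resid = y - np.polyval(coeffs, x)
              print(f"degree {degree}: residual sum of squares = {resid @ resid:.6f}")
          # The degree-7 polynomial interpolates all 8 points (residuals ~ 0), so it
          # can never visibly fail; the 2-parameter line leaves residuals and can.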

        • Rahul, thanks for that very useful phrase, “attack surface”. Very evocative; it shows how important it is to name something. I didn’t know it was a term from software security.

    • Rahul, I’m surprised to see an argument against a formal model. At least in my field, verbally stated explanations have so many hidden degrees of freedom that one can easily brush aside counter-evidence.

      • I’m targeting a particular class of formal models which are so flexible that they are hard to disprove in any meaningful way.

        Take this example: Can you explain what sort of data could falsify this model?

      • To clarify: with a verbal model, we are usually aware of its lack-of-falsifiability flaw.

        Mathiness is having mathematical models that seem “hard” and seem to offer opportunities for disproof, but are in fact structured in such a pliable manner that they are hardly an improvement over the vague, qualitative model.

  4. The propaganda model Weatherall et al. propose is a bit of a two-edged sword.

    On the one side are the industries that funded climate denial, and on the other are the academic and activist industries that funded climate alarmism. Both sides acted less than honorably. The latter’s circling of the wagons around the “hockey stick” was a serious blunder IMHO.

    https://www.youtube.com/watch?v=BuqjX4UeBYs
    https://www.uoguelph.ca/~rmckitri/research/McKitrick-hockeystick.pdf

  5. As someone who for the last decade has been accused of deploying the “tobacco strategy” in every case in which my opponent is riding a horse named “Statistically Significant”, here’s my perspective.

    Generally things begin thusly: Someone publishes in “Bubba’s Medical Journal of Peer Reviewed and Totally True Amazing Science” an observational study indicating that e.g. Deep Pockets Inc.’s mountain/spring/rain water causes cancer. Thereafter ads are run looking for Deep Pockets’ victims. Several poor, desperate souls answer the call and the one with the most empathetic story is chosen as lead plaintiff. I depose his/her expert and ask essentially “Aren’t you really just betting on horses after the race has already been won?” I point out that the expert gets most of his money from lawyers, that his “discoveries” parallel the financial motives of his masters, that his methods are just an ensemble of every QRP imaginable, and then demonstrate that he is either ignorant of the philosophical foundations of probability and statistics or using common misunderstandings about them to propagate a lie. For my troubles I am accused in closing arguments of doing a pitiably poor and obviously transparent job of trying to emulate Big Tobacco’s manufacturing of doubt.

    Sometimes I lose but usually I win. In one case the other side failed to strike a biostatistician on the jury panel; likely because she expressed anti-corporate views during voir dire. I suspected that she’d hate people who lie with statistics more than those who try to get rich selling tap water to hipsters. That time I was right.

    Anyway, the claim that someone who pushes back against a public health panic by questioning the underlying methodology must be a practitioner of the dark arts of tobacco denial is, in my view, just another way of trying to shut down conversations via the ad hominem fallacy; and as such it is a very bad business for those in the business of seeking truth.

    • > other side failed to strike a biostatistician on the jury panel;
      I thought that would never happen ;-)

      Also agreed: background motivations are important to be aware of, but with that in mind, the business of seeking truth should (must) ensure that all views are adequately heard.

  6. I found the model in this paper to be a helpful and informative exercise in theory building, though one’s mileage may vary for such judgments. But I’m commenting to inquire about a side issue raised above: Is there any principled reason why the absolute number of free parameters, per se, should influence our evaluation? If there are two models, ceteris paribus, I get why we’d prefer the model with fewer free parameters. But in evaluating the model, isn’t it the ratio of parameters to data points that matters? That’s my understanding of AIC and BIC. I would welcome anyone’s clarifications on that.
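    (For reference, and not something from the paper: the usual definitions are AIC = 2k − 2·log L̂ and BIC = k·log(n) − 2·log L̂, where k is the number of free parameters, n the number of data points, and L̂ the maximized likelihood. So AIC’s penalty depends only on the absolute parameter count, while BIC’s grows with log(n); neither is literally a ratio of parameters to data points, though both trade parameter count against fit. The falsifiability worry raised above is about flexibility per se, which these criteria only partially capture.)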

  7. Daniel Lakeland wrote:

    And I pointed out that I felt GCMs were pretty much in this boat as they obviously are flexible enough to potentially model Mars, Venus, Jupiter, Neptune, extrasolar planets, and by the way Earth or any number of close to earth but not earth planets.

    Does this really exist? I’d love to see one of these models applied to Venus/Mars/Moon data.
