All the things we have to do that we don’t really need to do: The social cost of junk science

I’ve been thinking a lot about junk science lately. Some people have said it’s counterproductive or rude of me to keep talking about the same few examples (actually I think we have about 15 or so examples that come up again and again), so let me just speak generically about the sort of scientific claim that:
– is presented as having an empirical basis—
– but where the empirical support comes from a series of statistical analyses—that is, no clear pattern in any individual case but only in averages—
– where the evidence is a set of p-values that are subject to forking paths—
– and where a design analysis suggests large type M and type S errors (see the sketch after this list)—
– where replications are nonexistent, or equivocal, or clearly negative—
– where theory is general enough that it can support empirical claims from any direction—
– and the theory has some real-world implication for how we do or should live our lives—
– and the result is one or more publications in prestigious general-science or field journals—
– with respectful publicity by the likes of NPR and the New York Times—
– and out-and-out hype by the likes of TED, Gladwell, and Freakonomics.
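
A quick aside on the design-analysis item, since it is the most technical one in the list: here is a minimal simulation sketch in Python, in the spirit of the Gelman and Carlin retrodesign calculation. The function and the numbers plugged in at the end are illustrative assumptions, not any particular study.

```python
import numpy as np

def retrodesign(true_effect, se, n_sims=100_000, seed=0):
    """Simulate power, type S (wrong sign), and type M (exaggeration) rates
    for a study whose estimate is roughly normal around true_effect
    with standard error se."""
    rng = np.random.default_rng(seed)
    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates / se) > 1.96  # two-sided alpha = 0.05
    sig_est = estimates[significant]
    power = significant.mean()
    type_s = (np.sign(sig_est) != np.sign(true_effect)).mean()
    type_m = np.abs(sig_est).mean() / abs(true_effect)
    return power, type_s, type_m

# A small true effect measured noisily, the situation described above:
power, type_s, type_m = retrodesign(true_effect=0.1, se=0.5)
print(f"power={power:.2f}, type S={type_s:.2f}, exaggeration ratio={type_m:.1f}")
```

The point: when power is this low, the estimates that clear the significance bar are the lucky ones, so they systematically overstate the effect's magnitude and will sometimes get its sign wrong.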

Before going on, let me emphasize that science is not a linear process, and mistakes will slip in, even under best practices. There can always be problems with data collection and analysis, and conditions can change, so that an effect can be present today but not next year. And even a claim that is supported by poor research can still happen to be correct. Also there’s no clean distinction between good science and junk science.

So here I’m talking about junk science that, however it was viewed by its practitioners when it was being done, can in retrospect be viewed as flawed, with those flaws being inherent in the original design and analysis. I’m not just talking about good ideas that happened not to work out, and I recognize that even bad ideas can stimulate later work of higher quality.

In our earlier discussions of the consequences of junk science, we’ve talked about the waste of resources as researchers pursue blind alleys, going in random directions based on their overinterpretation of chance patterns in data; we’ve talked about the waste of effort of peer reviewers, replicators, etc.; we’ve talked about the harm done by people trying therapies that don’t work, and also the opportunity cost of various good ideas that don’t get tried out because they’re lost in the noise, crowded out by the latest miracle claim.

On the other side, there’s the idea that bad science could still have some positive effects: fake miracle cures can still give people hope; advice on topics such as “mindfulness” could still motivate people to focus on their goals, even if the particular treatments being tried are no better than placebo; and, more generally, the openness to possible bad ideas also allows openness to unproven but good new ideas.

So it’s hard to say.

But more recently I was thinking about a different cost of junk science, which is as a drag on the economy.

The example I was thinking about in particular was an argument by Ujjval Vyas, which seemed plausible to me, that there’s this thing called “evidence-based design” in architecture which, despite its name, doesn’t seem to have much to do with evidence. Vyas writes:

The field is at such a low level that it is not worth mentioning in many ways except that it is deeply embedded in a $1T industry for building and construction as well as codes and regulations based on this junk. . . .

And here’s an excerpt from the Wikipedia page on evidence-based design:

The Evidence Based Design Accreditation and Certification (EDAC) program was introduced in 2009 by The Center for Health Design to provide internationally recognized accreditation and promote the use of EBD in healthcare building projects, making EBD an accepted and credible approach to improving healthcare outcomes. The EDAC identifies those experienced in EBD and teaches about the research process: identifying, hypothesizing, implementing, gathering and reporting data associated with a healthcare project.

So this is a different cost from those discussed before. It’s a sort of tax that goes into every hospital that gets built. Somebody, somewhere, has to pay for these Evidence Based Design consultants, and somebody has to pay extra to build the buildings just so.

For any particular case, the advice probably seems innocuous. For example, “People recover from surgery faster if they have a view of nature out the window.” Sure, why not? Nature’s a good thing. But add up this and various other rules and recommendations and requirements, and it sounds like you’re driving up the cost of the building, not to mention the money and effort that gets spent filling out forms, paying the salaries and hotel bills of consultants, etc. It’s a kind of rent-seeking bureaucracy, even if all or many of the people involved are completely sincere.

OK, I don’t know anything about evidence-based design in architecture so maybe I’m missing a few tricks in this particular example. I still think the general point holds, that one hidden cost of junk science is that it fills the world with a bureaucracy of consultants and payoffs and requirements.

47 thoughts on “All the things we have to do that we don’t really need to do: The social cost of junk science”

  1. Would you differentiate this cost from the cost of general trends that have nothing to do with junk science, such as scrum?

    Although now that I mention it, I’m sure there have been some studies done on the effectiveness of scrum, so there’s one more way it adds a new cost…

        • Yes, the framework has been around for a while at this point. But it’s my understanding that there’s been a huge rise in adoption in the past few years. On the other hand, I’ve only been interested in software development in the past few years, so there’s a confounder in my perception.

        • Software development is particularly prone to fads. Some buzzwords: patterns, pair programming, agile development, scrum, CASE, etc. The scientific basis for any of them is slim to none. It’s also a major case of interactions between prevailing conditions and effectiveness.

        • Daniel:

          It’s the same way with teaching. Programming, like teaching, is something we have to do, and something that we evidently are doing inefficiently, as we see with every student who’s done all the homework assignments but remains bewildered, or every program that we struggle to rewrite. So we’re highly motivated to find tricks to improve our efficiency, and we’ll try just about anything.

        • The thing is, we know how to solve this problem: you just get someone who is brilliant and extremely motivated to solve the problem, and then they do. The real issue is that this doesn’t scale, and such people are hard to find. I think MOOCs, for example, are exactly an attempt to make this method scale for teaching.

        • I think this subthread points to a common reasoning error (I’m sure it has a name that I am presently forgetting) that assumes that problems/issues are specific and local to a particular field or situation, when in fact they are very general. If it looks like a special problem in science, in programming, in teaching, in… wait, all the fields I know well… then it’s probably not a unique problem. The bias is all over the place, and it’s been bugging me a lot recently.

          I think the error pops up in this sentence from AG’s post: “one hidden cost of junk science is that it fills the world with a bureaucracy of consultants and payoffs and requirements.”

          The logic here suggests that, but for junk science, we’d be rid of consultants and payoffs and requirements. Surely false. Junk science is a symptom, not a cause.

        • Jason:

          Sure, but in the particular example discussed in the above post, it does seem that if the concept of “evidence-based architecture” had never been invented, we’d have saved ourselves many millions of dollars in transfer payments.

        • Andrew:

          I see what you are talking about, but I just don’t know enough about how architecture was before it had to be “evidence-based” to judge where we’d be without it. I could imagine, for instance, that the same consulting money was being paid to someone else who claimed some other sort of authority. If it were a new sort of inefficiency, wouldn’t the price of building have gotten much higher? I don’t know enough to say.

          People have been using dowsing rods for a while. The problem with the ADE 651 electronic dowsing rod (https://en.wikipedia.org/wiki/ADE_651) was not really the “innovation” to claim it worked by NMR.

        • Jason: Let’s just rephrase it to “were it not for rent-seeking, asymmetrical information exploiting, amoral bastard quacks”

          What all of this comes down to is that people are willing to seek out financial / economic gain that doesn’t involve producing a real service which in the absence of asymmetrical information would be obvious to the “buyer”. There are two sub-populations:

          1) People who carefully avoid too much introspection and therefore claim some kind of ignorance of how much BS they are polluting the world with. Cf. Wansink, etc.

          2) People who just don’t care. Cf. Martin Shkreli.

          Of these two populations, (2) is easier to identify and combat, but (1) does a lot of damage by virtue of being pretty darn widespread.

        • Jason:

          Could be. I bet it’s a bit of both: as we say in social science, the elasticity is somewhere between 0 and 1. On one hand, sure, there will always be rent-seekers; on the other hand, each new scam brings out a new batch of people to pay.

        • Jason: “The logic here suggests that, but for junk science, we’d be rid of consultants and payoffs and requirements.”

          I think there’s a logical fallacy here. I believe AG was implying that junk science adds to useless consultants, etc., but surely is not the only cause of these issues. Plenty of unnecessary layers of bureaucracy come about without the use of NHST :)

        • +1 to everyone’s comments. I’m just highlighting (and I think I’ve seen each of you say before) that people are really really “highly motivated to find tricks to avoid” uncertainty, feelings of ineffectiveness, decision-making responsibility, etc. I’ve become increasingly pessimistic about the good effect we can have by exposing any one of those tricks as a fraud, for the same reason that saying “no p-values” doesn’t solve the underlying problems that created their modern use.

        • I agree with you that the underlying problem is more important than exposing any one or small number of tricks. I’m not sure what to actually do, but there’s a serious problem of survivorship bias. Doing good science leads to being kicked out of science for being too slow to create major new discoveries compared to the Wansinks of the world. If you want a recipe for the decline and fall of civilization, it’s pretty much: punish the creators and reward the rent-seekers.

        • Scrum is adopted by private-sector firms because they think it is cost-effective for their business. It isn’t mandated by regulatory or administrative authorities. It doesn’t need to be proven to be effective because firms are free to adopt it or not and can switch to something better whenever they wish. It’s also not what I would call a fad, but rather is consistent with a multi-decade trend toward rapid development using relatively small teams.

          I’m not an expert on scrum, but the general topic is embedding junk science into administrative and regulatory requirements for public projects. Love it or hate it, scrum isn’t embedding itself into the costs of public goods (hospitals). “It’s a sort of tax that goes into every hospital that gets built. Somebody, somewhere, has to pay for these Evidence Based Design consultants, and somebody has to pay extra to build the buildings just so.”

          It isn’t like rent-seekers can just demand a seat at the table. They are muscling in on the turf of Architects and Civil Engineers. You might wonder about how an architect is supposed to know how to build a hospital, but they are already on top of it. American College of Healthcare Architects http://www.healtharchitects.org

          And these organizations are also subject to antitrust scrutiny. Scrum has two competing groups that provide training. And yea — there is this: http://www.ashe.org. American Society for Healthcare Engineering. By the way, I had never heard of either of these, but was reasonably confident that Architects would have a specialized professional interest group. And then guessed that the engineers would have one also.

          After all, 17% of GDP goes into healthcare. And when I go into health facilities, they tend to be newer, better built, and look expensive.

          Since they used the term ‘evidence based’ … it’s triggering.

        • Greg Wilson (formerly of Software Carpentry) was looking into the efficacy of these software development methods for quite a while. From brief Twitter snippets, I believe his main conclusion (or at least suspicion) was that *any* method applied with some degree of consistency and discipline would yield high-quality code over no method at all, but that the exact details of the method were inconsequential. He and Andy Oram compiled a book summarizing what little research there is out there (which I haven’t quite found the time to really dig into, but which others might find of interest):

          https://www.amazon.com/Making-Software-Really-Works-Believe/dp/0596808321/ref=sr_1_1?ie=UTF8&qid=1496711474&sr=8-1&keywords=Greg+Wilson

        • With respect to MOOCs, my main gripe with them is that they are effectively just shifting the lecture format from physical classrooms to video, without addressing many of the underlying criticisms of the lecture (such as those made by Abraham Flexner in *1905*). I’ve arrived at the conviction that teaching is better accomplished through more hands-on, smaller-scale workshops (or “on-the-job” experience where knowledge can be contextualized) and that there should be more institutions dedicated to research alone for individuals who don’t necessarily want to teach.

  2. Junk science is probably much more ubiquitous than a dozen or so examples. Here is the view of a prominent historian of science, Rom Harre, writing in his book Pavlov’s Dog and Schrodinger’s Cat (p. 146), “Philosophers have puzzled over the question of the relation between locally obtained items of evidence and the credence they give to related hypotheses. Many have tried to formalize the relationship in terms of ratios of favourable and unfavourable items of evidence to the scope of the generalization they support or undermine. Most of this looks pretty unrealistic when one pays close attention to how scientific research is actually carried out. Over and over again important results are derived from a few or even just one exemplar.” But then, few philosophers are statisticians or possess statistical training despite whatever lip service is given to it.

    If nothing else, I thought you (Andrew) would get a kick out of the fact that Harre has the word “cat” in the title.

  3. It’s recently become clear to me that one of the weaknesses of our side of the argument is that we’re not proposing something that could empower a bureaucracy and justify funding it. So we don’t have enough bureaucrats in our corner. And administrators can’t really make sense of our suggestions.

    “It won’t really cost anything and you don’t need to set up a committee. Just don’t make junk. And if junk is submitted, don’t publish it. And if you see some junk, criticize it.”
    “So where’s my role?” asks the administrator.
    “Uh, you don’t really have one. Take the day off, maybe?”
    Which is where the other side sees their opportunity, and proposes an “event” that needs a “budget” and will have an “impact”, which might lead to a “consortium” with an “overhead” … I’m not sure why I’m putting all those words in quotes …

  4. A tentative outline of our current tactics for the promotion of better quantitative research, without judgment on their (absolute or relative) effectiveness at improving practices:

    1. Science of Social Science: Wherein we discuss the virtues and pitfalls and tradeoffs of various methods in a semi-technical manner and hope that others find it useful

    2. Social Science of Science: understanding research practices and their relationship to the academic-industrial-hype complex. Recurring topics include: a) financial incentives for academics, and the measurement and evaluation of research productivity; b) quantitative training, particularly Ph.D. students; c) journal practices and incentives; d) gossip.

    3. Whack-a-Mole: Wherein we dissect (and occasionally ridicule) particularly bad and/or unlucky research papers. The specifics of one paper are generally interpreted as either a) the symptom of an inter-generational degenerative disease (epistemological in nature); or b) a transparent attempt by some person(s) to gain fortune and power at the expense of truth-seeking and public service; or c) both.

    4. Rhetorical Strategies: this is where this post fits. Our discussion of what kinds of things can be done to improve research practices and how to frame and approach and argue them. One previous strategy we’ve discussed is to point to the costs of bad science to researchers. An alternative strategy would be to point to the costs of poor research to society at large. This post is sort of a test run of one part of that argument: inefficient public and private expenditure as an effect of poor scientific practice.

    • Nice list.

      With regard to 4. Rhetorical Strategies: … costs of poor research to society at large – I have added a p.s. here
      http://statmodeling.stat.columbia.edu/2017/05/24/take-two-laura-arnolds-tedx-talk/

      I would add a 5. Understand what science (right inquiry) should be.

      I would argue it’s an attitude rather than a collection of methods providing an accepted and credible approach that can be accredited, then scaled up and used as barriers to entry. An attitude to get at reality the best we can regardless of anything else (though constrained by ethics). One thing such an attitude leads to is inclusion rather than exclusion, so that you can learn how you are wrong from someone who thinks differently than you (is wrong about different things).

      Now in my experience, the only way I learned what a researcher’s true attitude was involved seeing what they did when something went wrong. Interestingly, I had a conversation last week with someone on a Gates grant, and they said they were informed that they were to report on things that went wrong or did not work. I think random audits would be better.

    • “Social Science of Science: understanding research practices and their relationship to the academic-industrial-hype complex. Recurring topics include: a) financial incentives for academics, and the measurement and evaluation of research productivity; b) quantitative training, particularly Ph.D. students; c) journal practices and incentives; d) gossip.”

      GS: Why *social* science of science? These are behavioral issues, and there is a *natural* science of behavior. BTW, what is “social” science? I mean…is there anything that makes it “social science” rather than “science”? (And these aren’t scare quotes, which I often put around the word “science” to mean, of course, that said endeavor is not science at all.) Note: my question is not rhetorical.

  5. In urban planning (my field) the most egregious example of this type of statistical illiteracy is seen in parking requirements. Donald Shoup wrote a nice paper about this issue (and a whole book called The High Cost of Free Parking): http://www.shoupdogg.com/wp-content/uploads/sites/10/2017/01/TruthInTransportationPlanning.pdf

    Through high amounts of required parking (based on flimsy science), cities have raised the cost of building and made driving the default way people get around. The actual costs to society are enormous, and all backed with terrible studies.

  6. I work in private industry, and one metric I’ve been asked to analyze is net promoter score (NPS). If you’ve ever answered the survey question “how likely are you to recommend X to friends and family?”, that’s an NPS survey. The original Harvard Business Review article is here: http://marketinglowcost.typepad.com/files/the-one-number-you-need-to-grow-1.pdf . The claim is that this metric is the only number firms need to focus on in order to have profitable growth. Pretty amazing, eh? Unfortunately, the original article is riddled with statistical and methodological problems, and attempts at replication have shown zero evidence for the claim. This hasn’t stopped the author from publishing at least two books on the topic, writing another couple of articles, and expanding his consultancy gig to, um, promote his net promoter score to multiple large companies (you might have heard of GE, American Express, and Microsoft). I don’t know if he’s appeared on NPR or TED, but the foundation is clearly there. And all of this is based on one seriously flawed study.
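
    For concreteness, the standard NPS arithmetic is simple enough to sketch in a few lines of Python, using the conventional 0-10 scale with 9-10 as promoters and 0-6 as detractors (the survey responses below are made up for illustration):

    ```python
    def net_promoter_score(ratings):
        """NPS = percent promoters (ratings 9-10) minus percent detractors (ratings 0-6)."""
        promoters = sum(r >= 9 for r in ratings) / len(ratings)
        detractors = sum(r <= 6 for r in ratings) / len(ratings)
        return 100 * (promoters - detractors)

    # Ten made-up responses: 5 promoters, 3 detractors -> 50 - 30 = 20.0
    print(net_promoter_score([10, 9, 9, 8, 7, 6, 3, 10, 5, 9]))
    ```

    Note that very different response distributions can produce the same score, which is one more reason to be wary of “the one number you need.”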

    • I’m curious – can you expand a bit on your criticisms of this study? The linked HBR article has few details so I was not able to see what errors may have been made – and the correlations between net promoters (those that would recommend a product/company minus those that would not) and growth rates were not overly convincing. But I am willing to believe that that one measure – net promoters – may, in fact, correlate well with company growth rates. And, it may correlate better than all the other horrible customer survey questions that are asked.

      It seems to me that the more serious issue is that we should not be at all surprised at the close correlation between net promoters and company growth. But correlation is not causation, and that single question about referring others to this company provides no information whatsoever about what to do to increase growth. Work on increasing the net promoter score? How exactly do we operationalize that? It is simply another way of asking how we make a company grow. So, it seems to me that the problem with this “research” is not that it is statistically unsound (though it might be) but that it is almost meaningless – a sort of tautology. Customers buy from companies they would recommend to others! TED Talk on the way.

  7. The “social cost of junk science” is a very apt way to describe the issue that I was hoping to highlight. The core of the issue to me is, and I think this is defensible, that when there are limited resources, the “waste” that results from embedded junk science can be a serious problem. The conditions under which junk science can be harmless or inconsequential seem to require that no or only de minimis resources are used or redirected from other more worthwhile activity. EBD is only one in a litany of silly things that have become baked into all kinds of claimed architectural “research” and are now backed by all kinds of important organizations, companies, and professionals. Architecture, like many other disciplines, wants to enjoy the patina of “science” prestige as a way to fool potential clients, legislators, and the public but has absolutely no intention of actually questioning the quality or character of any “empirical” studies if such an examination might not comport with the confirmation biases common in the field.

    The most powerful and recent example of the social cost of junk science in my area can be seen in the use of EPA’s Energy Star rating system (with requisite claims that “scientists” made it up after doing “research”) to provide measures of building performance when in fact the underlying algorithms and data sets lead to uncertainty ranges that can often approach 30% or are otherwise fatally flawed. See the definitive work of John Scofield, a physicist who has written extensively on the topic (https://thepragmaticsteward.com/2016/11/21/building-energy-star-scores-good-idea-bad-science-book-release/ and http://www.sciencedirect.com/science/article/pii/S037877881300529X). The problem is that Energy Star is now entrenched in all kinds of green building rating systems, codes, and governmental agency requirements. Beyond these areas, global real estate companies like JLL, CBRE, etc. now back this type of measurement and encourage the general validation of not only Energy Star but also junk-science green rating systems like the USGBC, which has always refused to share any underlying data on building performance. When every federal agency and most private sector real estate players accept Energy Star as a validated benchmarking system, vast sums of money are spent chasing a ghost. We may all want properly defined efficient buildings, but wasting money at this scale for nothing but good feelings is rather sad. Wouldn’t it be better to dedicate those resources to providing micronutrients to malnourished children, or some other activity with directly measurable and useful outcomes?

    At the same time, a vast array of parasites and self-interested parties tack as the wind blows (the rent-seeking bureaucracy you mention). Even if we are truly conservative and say the green rating systems requiring Energy Star modeling add only 1-3% to the total cost of the many billions of square feet of new built assets, we can begin to grasp the scope of the waste. The 1-3% is only the beginning of the compliance costs and misdirected use of funds resulting from the creation of an industry surrounding Energy Star.
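
    Rough arithmetic to fix ideas (all inputs here are placeholder assumptions, not measurements): at a construction cost of $150 per square foot, one billion square feet of new construction comes to $150 billion, so a 1-3% adder is on the order of $1.5-4.5 billion, before counting the ongoing compliance costs just mentioned.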

    Andrew, I think you are very right to point out that things like EBD increase the overall cost of buildings and function as a kind of tax that goes unseen especially when a licensed professional can easily hide from the owner or purchaser of the services the flimsy basis of the claims. The asymmetric relation as well as the inability for the principal/owner to exercise meaningful oversight or protect him or herself from opportunistic activity by the licensed professional creates fundamental hostage taking outcomes in the transaction for design services. This explains both the ability to avoid detection of architectural junk science and its continued attractiveness to the architectural practitioner as an easy way to allow confirmation bias to become writ large. One particularly useful work, though not really about statistics, is a work by Timur Kuran that seems appropriate called Private Truths, Public Lies: the Social Consequences of Preference Falsification where he discusses the structure of group interactions and the fatal problems of publicly manifested positions and privately held beliefs (what he calls preference falsification). In particular he discusses the problem of regime change, cascades of public opinion, and voting activity. The social cost of junk science is intertwined with this problem of preference falsification, which is clearly linked to social desirability bias, since economic and political activity can be advantageously structured to take advantage of these “public lies” so as to make them permanently a part of the frictional cost of societal activity.

    • I’m curious about this because I mostly have heard of LEED as the standard that people advertise. I could read a Google article about LEED vs. Energy Star, but if there are competing standards, it provides at least some element of choice.

      I took a look at a firm (a REIT) that develops and manages distribution centers and large warehouses (and other industrial property).

      This is from their sustainability report:

      We have extensive experience constructing buildings to industry leading environmental standards like LEED (Leadership in Energy and Environmental Design) and the California Green Building Standards Code (CALGreen Code). In Southern California, First Industrial developed First Inland Logistics Center, a 691,960 square-foot state-of-the-art cross-dock warehouse, to LEED standards. Our First Chino Logistics Center and First Bandini Logistics Center are also being built to CALGreen Code, comparable to LEED.

      I’m assuming that the California buildings use the CALGreen Code because it is mandated, or there is a tax incentive, or it is required by businesses doing business with the state.

      I’m not arguing that it isn’t a problem. I don’t see anyone involved in these transactions being extreme tree huggers. I suppose that any standard that tries to apply to all industrial real estate is not going to optimize everything. Tenants like newer stuff because it has upgraded HVAC and electricity, docks and doors are semi-standardized, ceiling heights meet current standard expectations and are more uniform, &c.

      They are big enough to push back on standards or ignore a part that seems overly wasteful. Not to mention their tenants sign multi-year leases and have much larger and sharper elbows. Like Amazon, for one.

      If it is a standard instead of bespoke, it can’t be optimal. And maybe some of their customers need to signal their seriousness regarding sustainability and want a certification regardless. So I don’t see how you can eliminate all waste.

      On the other hand, there is some competition between certifying organizations, a lot of it is voluntary, and it seems directionally correct. LED lighting and newer HVAC should pay for themselves. Or is the issue more serious? Without some competitive pressures, I have a feeling it would be a lot worse.

      • Tom, unfortunately the problem is much more serious. The claim in much of the green building industry is that it is voluntary and backed by “science”, but that is only on the surface and part of the marketing hype. Large orgs in this area, even real estate giants, have long ago been captured by advocates in attempts to stay “current.” Companies seeking to gain some ostensible market advantage often look to marketing themselves as green. The classic case is the product in the building industry that costs too much or is lower performing but is discovered to be “green”. These companies now launch marketing and lobbying campaigns to force everyone to be green so that their product can become competitive, all the while getting the help of the architectural establishment. The increased price is paid by consumers who are now forced to buy a product that is both higher cost and lower functionality but is green. One of the best examples would be to look at the WELL Building system (which grew out of the USGBC) and the orgs and people behind this (http://delos.com/services/programs/well-building-standard). I spent a decade in the bowels of this world and it has all the charm of “alternative medicine” for the public, but all the dangers as well. Think of USGBC as the GNC of this realm. USGBC, WELL, EBD for healthcare are all interlinked since “wellness” and health are interchangeable in this discourse. Harvard, Mayo, and others are all too happy to jump on this bandwagon and thus can produce endless press releases and marketing fodder to the public. What layperson would dare to question anything that came out of Harvard’s T. H. Chan School of Public Health, which has a whole program on Healthy Buildings (http://forhealth.org/)? Who would be against healthy buildings?

        It is more important to keep the general proposition in mind than get caught up in this bizarre sub-culture that unfortunately has an impact on the total cost borne by society for all the design, construction, operation, and maintenance of built assets. There are many similarities between the social cost of junk science and the social cost of useless or vested-interest regulation. Both are easily captured since both provide easy answers for public consumption that is further established or validated by the use of “studies.” These studies get major play in the right places and are then used to effectuate policy options. Looking more closely at the nature of the study or the data or the methodology quickly becomes a fool’s errand for those wishing to protect their careers or a great boon to the careerists willing to go along for the ride and, as is often pointed out in this blog and comments, doesn’t even require bad actors. LEDs, HVAC systems, tighter building envelopes are all much more complicated than they appear at first glance. LEDs do have a very good business case, but much of this increased efficiency suffers from Jevons’ Paradox, and there has been no decrease in overall energy use by buildings. As an engineering colleague of mine often puts it, we are becoming experts at wasting energy more efficiently.

        Remember that things like heat produced by traditional lighting decreases the heating load while increasing the cooling load, so the use of LED lighting may be more or less advantageous depending on regional variations in climate characteristics. A cool (white) roof can be useful in some regions and detrimental in others depending on the moisture migration characteristics of the envelope elements and can lead to mold growth or other moisture-related problems. A green roof is a hotbed of all kinds of dangerous bio-aerosols that can have a very detrimental impact on vulnerable populations as well as rodent infestations. The difference in cost for a triple-glazed window instead of a double-glazed is very significant. It would serve the purposes of the triple-glazed window manufacturers, suppliers, and installers, under the guise of well-intentioned energy efficiency increases, to raise the standards and get into code “R-value” minimums that make a triple-glazed option necessary. Architects demand that hospitals get designed with no PVC (PVC is evil, don’t you know, and made by evil chemical companies) but virtually all blood bags are made of the stuff and most electrical cabling is sheathed in the stuff because of its fire-resistant qualities.

        The problem of junk science is ultimately an ethical one. Statistics is also at its core an ethical problem. Both junk science and statistics are epistemological problems that pose signal challenges to anyone that wants to know about the empirical realm without fooling him or herself. This necessitates a fundamental skepticism that is often at odds with competing values.

  8. “– is presented as having an empirical basis—
    – but where the empirical support comes from a series of statistical analyses—that is, no clear pattern in any individual case but only in averages—
    – where the evidence is a set of p-values that are subject to forking paths—
    – and where a design analysis suggests large type M and type S errors—
    – where replications are nonexistent, or equivocal, or clearly negative—
    – where theory is general enough that it can support empirical claims from any direction—
    – and the theory has some real-world implication for how we do or should live our lives—
    – and the result is one or more publications in prestigious general-science or field journals—
    – with respectful publicity by the likes of NPR and the New York Times—
    – and out-and-out hype by the likes of TED, Gladwell, and Freakonomics.”

    If they are making the same mistakes in the same ways, then maybe there is a way to develop a process to review the paper’s methodology that is highly efficient, get the review published, and send it to the NYT, NPR, and Freakonomics. Maybe use students for part of the work. Clerical personnel for some of it. Rank the potential damage and target the most costly.

    I’m not endorsing the proposed process as much as suggesting that although it seems most logical to deal with a general problem with a general solution — perhaps a bottom up, brute force approach of attacking a meaningful number of them, one at a time, would be more effective.

    And I suppose you could expect a ‘form letter’ type rebuttal. But if researchers knew they would be potentially subject to challenge on every paper, the critique would get some traction. This is primarily a response to my impression of the relative uniformity of the issues.

  9. Speaking of evidence-based architecture: Tom Wolfe’s theory is (from “I Am Charlotte Simmons”): “… the existence of conspicuous consumption one has rightful access to — as a student had rightful access to the fabulous Dupont Memorial Library — creates a sense of well-being.”

    • I find both theories pretty plausible, but then I understand where both Wilson and Wolfe are coming from. Wilson likes nature (especially bugs) and Wolfe likes owning expensive stuff and feeling high status.
