I just read “Lying for Money: How Legendary Frauds Reveal the Workings of Our World,” by Dan Davies.
I think the author is the same Dan Davies who came up with the saying, “Good ideas do not need lots of lies told about them in order to gain public acceptance,” and also the “dsquared” who has occasionally commented on this blog, so it is appropriate that I heard about his book in a blog comment from historian Sean Manning.
As the title of this post indicates, I’m mostly going to be talking here about the differences between frauds in three notoriously fraud-infested but very different fields of human endeavor: science, sports, and business.
But first I wanted to say that this book by Davies is one of the best things about economics I’ve ever read. I was trying to think what made it work so well, and I realized that the problem with most books about economics is that they’re advertising the concept of economics, or they’re fighting against dominant economics paradigms . . . One way or another, those books are about economics. Davies’s book is different in that he’s not saying that economics is great, he’s not defensive about economics, and he’s not attacking it either. His book is not about economics; it’s about fraud, and he’s using economics as one of many tools to help understand fraud. And then when he gets to Chapter 7 (“The Economics of Fraud”), he’s well situated to give the cleanest description I’ve ever seen of economics, integrating micro to macro in just a few pages. I guess a lot of readers and reviewers will have missed that bit because it’s not as lively as the stories at the front of the book. Also, who ever gets to Chapter 7, right? That’s kinda too bad. Maybe Davies could follow up with a short book, “Economics, what’s it all about?” Probably not, though, as there are already a zillion other books of this sort, and there’s only one “Lying for Money.” I’m sure there are lots of academic economists and economics journalists who understand the subject as well or better than Davies; he just has a uniquely (as far as I’ve seen) clear perspective, neither defensive nor oppositional but focused on what’s happening in the world rather than on academic or political battles for the soul of the field. (See here and here for further discussion of this point.)
Cheating in business
Cheating in business is what “Lying for Money” is all about. Davies mixes stories of colorful fraudsters with careful explanations of how the frauds actually worked, along with some light systematizing of different categories of financial crime.
In his book, Davies does a good job of not blaming the victims. He does not push the simplistic line that “you can’t cheat an honest man.” As he points out, fraud is easier to commit in an environment of widespread trust, and trust is in general a good thing in life, both because it is more pleasant to think well of others and also because it reduces transaction costs of all sorts.
Linear frauds and exponential frauds
Beyond this, one of the key points of the book is that there are two sorts of frauds, which I will call linear and exponential.
In a linear fraud, the fraudster draws money out of the common reservoir at a roughly constant rate. Examples of linear frauds include overbilling of all sorts (medical fees, overtime payments, ghost jobs, double charging, etc.), along with the flip side of this, which is not paying for things (tax dodging, toxic waste dumping, etc.). A linear fraud can go on indefinitely, until you get caught.
In an exponential fraud, the fraudster needs to keep stealing more and more to stay solvent. Examples of exponential frauds include pyramid schemes (of course), mining fraud, stock market manipulations, and investment scams of all sorts. A familiar example is Bernie Madoff, who raised zillions from people by promising them unrealistic returns on their money, but as a result incurred many more zillions of financial obligations. The scam was inherently unsustainable. Similarly with Theranos: the more money they raised from their investors, the more trouble they were in, given that they didn’t actually ever have a product. With an exponential fraud you need to continue expanding your circle of suckers—once that stops, you’re done.
A linear fraud is more sustainable—I guess the most extreme example might be Mister 880, the counterfeiter of one-dollar bills who was featured in a New Yorker article many years ago—but exponential frauds can grow your money faster. Embezzling can go either way: in theory you can sustainably siphon off a little bit every month without creating noticeable problems, but in practice embezzlers often seem to take more money than is actually there, giving them unending future obligations to replace the missing funds.
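The distinction can be made concrete with a toy calculation (my illustration, not Davies's; the dollar figures and the 50% promised return are made up): a linear fraud's take grows by a fixed amount per period, while a Ponzi-style fraud's outstanding obligations compound, which is exactly why it must keep expanding its circle of suckers.

```python
# Toy sketch of the two fraud dynamics. All numbers are illustrative.

def linear_fraud(skim_per_period, periods):
    """Cumulative take of a linear fraud: a steady skim, nothing compounds."""
    return [skim_per_period * t for t in range(1, periods + 1)]

def ponzi_fraud(initial_raise, promised_return, periods):
    """Outstanding obligations of an exponential fraud: each period the
    fraudster owes (1 + promised_return) times what was owed before."""
    owed = initial_raise
    trajectory = []
    for _ in range(periods):
        owed *= 1 + promised_return
        trajectory.append(owed)
    return trajectory

linear = linear_fraud(skim_per_period=10_000, periods=10)
ponzi = ponzi_fraud(initial_raise=10_000, promised_return=0.5, periods=10)
# After 10 periods the linear fraudster has taken 10 * 10,000 = 100,000 total;
# the Ponzi operator owes 10,000 * 1.5**10, roughly 576,650, and must raise
# that much in fresh money just to stay solvent.
```

The asymmetry is the point: the linear fraudster's exposure grows only as fast as the skimming, while the Ponzi operator's liabilities grow whether or not any new money comes in.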
With any exponential fraud, the challenge is to come up with an exit strategy. Back in the day, you could start a pyramid scheme or other such fraud, wait until a point where the scam had gone on long enough that you had a good profit but before you reached the sucker event horizon, and then skip town. The only trick is to remember to jump off the horse before it collapses. For business frauds, though, there’s a paper trail, so it’s harder to leave without getting caught. The way Davies puts it is that in your life you have one chance to burn your reputation in this way.
Another way for a fraudster to escape, financially speaking, is to go legit. If you’re a crooked investor, you can take your paper fortune to the racetrack or the stock market and make some risky bets: if you win big, you can pay off your funders and retire. Unfortunately, if you win big, and you’re already the kind of person to conduct an exponential fraud in the first place, it seems likely you’ll just take this as a sign that you should push further. Sometimes, though, you can keep things going indefinitely by converting an exponential into a linear scheme, as seems to have happened with some multilevel marketing operations. As Davies says, if you can get onto a stable financial footing, you have something that could be argued was never a fraud at all, just a successful business that makes its money by convincing people to pay more for your product than it’s worth.
The final exit strategy is recidivism, or perhaps rehabilitation. Davies shares many stories of fraudsters who got caught, went to prison, then popped out and committed similar crimes again. They kept doing what they were good at! Every once in a while you see a fraudster who managed to grease enough palms that after getting caught he could return to life as a rich person, for example Michael Milken.
One other thing. Yes, exponential frauds are especially unsustainable, but linear frauds can be tricky to maintain too. Even if you’re cheating people at a steady, constant rate, so you have no pressing need to raise funds to cover your past losses, you’re still leaving a trail of victims behind, and any one of them can decide to be the one to put in the effort to stop you. More victims = greater odds of being tracked down. There’s all sorts of mystique about “cooling off the mark,” but my impression is that the main way that scammers get away with their frauds is by maintaining some physical distance from the people they’ve scammed, and by taking advantage of the legal system to make life difficult for any whistleblowers or victims who come after them. Again, see Theranos.
Cheating in science
Science fraud is a mix of linear and exponential. The linear nature of the fraud is that it’s typically a little bit in paper after paper, grant proposal after grant proposal, Ted talk after Ted talk, a lie here, an exaggeration there, some data manipulation, some p-hacking, each time doing whatever it takes to get the job done. The fraud is linear in that there’s no compounding; it’s not like each new research project requires an ever-larger supply of fake data to make up for what was taken last time.
On the other hand, there’s a potentially exponential problem that, if you use fraud to produce an important “discovery,” others will want to replicate it for themselves, and when those replications fail, you’ll need to put in even more effort to prop up your original claims. In business, this propping-up can take different forms (new supplies of funds, public relations, threats, delays, etc.), and similarly there are different ways in science to prop up fake claims: you can ignore the failed replications and hope for the best, you can attack the replicators, you can use connections in the news media to promote your view and use connections in academia to publish purported replications of your own, you can jump sideways into a new line of research and cheat to produce success there . . . lots of options. The point is, fake scientific success is hydra-headed: it will spawn continuing waves of replication challenges. As with financial fraud, the challenge, after manufacturing a scientific success, is to draw a line under it, to get it accepted as canon, something they can never take away from you.
Cheating in sports
Lance Armstrong is an example of an exponential fraud. He doped to win bike races—apparently everybody was doping at the time. But Lance was really really good at doping. People started to talk, and then Lance had to do more and more to cover it up. He engaged in massive public relations, he threatened people, he tried to wait it out . . . nothing worked. Dude is permanently disgraced. It seems that he’s still rich, though: according to Wikipedia, “Armstrong owns homes in Austin, Texas, and Aspen, Colorado, as well as a ranch in the Texas Hill Country.”
Other cases of sports cheating have more of a linear nature. Maradona didn’t have to keep punching balls into the net; once was enough, and he still got to keep his World Cup victory. If Brady Anderson doped, he just did it and that was that; no escalating behavior was necessary.
Cheating in journalism
Journalists cheat by making things up in the fashion of Mike Barnicle or Jonah Lehrer, or by reporting stories that originally appeared elsewhere without crediting the original source, which I’ve been told is standard practice at the New York Times and other media outlets. Reporting an already-told story without linking to the source is considered uncool in the blogging world but is so common in regular journalism that it’s not even considered cheating! Fabrication, though, remains a bridge too far.
Overall I’d say that cheating in journalism is like cheating in science and sports in largely being linear. Every instance of cheating leaves a hostage to fortune, so as you continue to cheat in your career, it seems likely you’ll eventually get found out for something or another, but there’s no need for an exponential increase in the amount of cheating in the way that business cheaters need to recoup larger and larger losses.
The other similarity of cheating in journalism to cheating in other fields is the continuing need for an exit strategy, with the general idea being to build up reputational credit during the fraud phase that you can then cash in during the discovery phase. That is, once enough people twig to your fraud, you are already considered too respectable or valuable to dispose of. Mike Barnicle is still on TV! Malcolm Gladwell is still in the New Yorker! (OK, Gladwell isn’t doing fraud, exactly: rather than knowingly publishing lies, he’s conveniently putting himself in the position where he can publish untrue and misleading statements while wrapping himself in some sort of veil of ignorance where he can’t be held personally to blame for these statements. He’s playing the role of a public relations officer who knows better than to check the veracity of the material he’s being asked to promote.)
Art fraud
I don’t have anything really to say about cheating in art, except that it’s a fascinating topic and much has been written about it. Art forgery involves some amusing theoretical questions, such as: if someone copies a painting or a style of a no-longer-living artist so effectively that nobody can tell the difference, is anyone harmed, other than the owners of existing work whose value is now diluted? From a business standpoint, though, art forgery seems similar to other forgery in being an essentially linear fraud, again leading to a linearly increasing set of potentially incriminating clues.
Closely related to art fraud is document fraud, for example the hilarious and horrifying (but more hilarious than horrifying) gospel of Jesus’s wife fraud, and this blurs into business fraud (the documents are being sold) and science fraud (in this case, bogus claims about history).
Similarities between cheating in business, science, sports, and journalism
Competition is a motivation for cheating. It’s hard to compete in business, science, sports, and journalism. Lots of people want to be successes and there aren’t enough slots for everyone. So if you don’t have the resources or talent or luck to succeed legitimately, cheating is an alternative path. Or if you are well situated for legitimate success, cheating can take you to the next level (I’m looking at you, Barry Bonds).
Cheating as a shortcut to success, that’s one common thread in all these fields of endeavor. There’s also cheating in politics, which I’m interested in as a political scientist, but right now I’m kinda sick of thinking about lying cheating political figures—this includes elected officials but also activists and funders (i.e., the bribers as well as the bribed)—so I won’t consider them here.
Another common thread is that you’re not supposed to cheat, so the cheater has to keep it hidden, and sometimes the coverup is, as they say, worse than the crime.
A final common thread is that business, science, sports, journalism, and art are . . . not cartels, necessarily, but somewhat cooperative enterprises whose participants have a stake in the clean reputation of the entire enterprise. This motivates them to look away when they see cheating. It’s unpleasant, and it’s bad all around for the news to spread, as this could lead to increased distrust of the entire enterprise. Better to stick to positivity.
Differences
The key difference I see between these different areas is that in business it’s kinda hard to cheat by accident. In science we have Clarke’s Law: Any sufficiently crappy research is indistinguishable from fraud. In business or sports we wouldn’t say that. OK, there might be some special cases, for example someone sells tons of acres of Florida swampland and is successful because he (the salesman) sincerely thinks it’s legitimate property, but in general I think of business frauds as requiring something special, some mix of inspiration, effort, and lack of scruple that most of us can’t easily assemble. A useful idiot might well be useful as part of a business fraud, but I wouldn’t think that ignorance would be a positive benefit.
In contrast, in research, a misunderstanding of scientific method can really help you out, if your goal is to produce publishable, Gladwell-able, Freakonomics-able, NPR-able, Ted-able work. The less you know and the less you think, the further you can go. Indeed, if you approach complete ignorance of a topic, you can declare that you’ve discovered an entire new continent, and a pliable news media will go with you on that. And if you’re clueless enough, it’s not cheating, it’s just ignorance!
In this dimension, sports and art seem more like business, and journalism seems more like science. Yes, you can cheat in sports without realizing it, but knowing more should allow you to be more effective at it. I can’t think of a sporting equivalent to those many scientists who produce successful lines of research by wandering down forking paths, declaring statistical significance, and not realizing what they’ve been doing.
With journalism, though, there’s a strong career path of interviewing powerful people and believing everything they say, never confronting them. To put it another way, there’s only one Isaac Chotiner, but there are lots and lots of journalists who deal in access, and I imagine that many of them are sincere, i.e. they’re misleading their readers by accident, not on purpose.
Other thoughts inspired by the book Lying for Money
I took notes while reading Davies’s book. Page references are to the Profile Books paperback edition.
p.14, “All this was known at the time.” This comes up again on p.71: “At this point, the story should have been close to its conclusion. Indeed, the main question people asked in 1982, when OPM finally gave up and went bankrupt, is why didn’t it happen three years earlier? Like a Looney Tunes character, nothing seemed to stop it. New investors were brought in as the old ones gave up in disgust.” This happens all the time; indeed, one of the things that struck me about the Theranos story was how the company thrived for nearly a decade after various people in the company realized the emptiness of its efforts.
A fraud doesn’t stay afloat all by itself; it takes a lot of effort to keep it going. This effort can include further lies, the judicious application of money, and, as with Theranos, threats and retaliation. It’s a full-time job! Really there’s no time to make up the losses or get the fictional product to work, given all the energy being spent to keep the enterprise alive for years after the fact of the fraud is out in the open.
p.17, “Fraudsters don’t play on moral weaknesses, greed or fear; they play on weaknesses in the system of checks and balances.” I guess it’s a bit of both, no? One thing I do appreciate, though, is the effort Davies puts in to not present these people as charming rogues.
I want to again point to a key difference between fraud in business and fraud in science. Business fraud requires some actual talent, or at least an unusual lack of scruple or willingness to take risks, characteristics that set fraudsters apart from the herd. In contrast, scientific misconduct often just seems to require some level of stupidity, enough so that you can push buttons, get statistical results, and draw ridiculous conclusions without looking back. Sure, ambition and unscrupulousness can help, but in most cases just being stupid seems like enough, and also is helpful in the next stage of the process when it’s time to non-respond to criticism.
p.18, “Another thing which will come up again and again is that it is really quite rare to find a major commercial fraud which was the fraudster’s first attempt. An astonishingly high proportion of the villains of this book have been found out and even served prison time, then been placed in positions of trust once again.” I’m reminded of John Gribbin and John Poindexter.
Closer to home, there was this amazing—by which I mean amazingly horrible—story of a public school that was run jointly by the New York City Department of Education and Columbia University Teachers College. The principal of this school had some issues. From the news report:
In 2009 and 2010, while Ms. Worrell-Breeden was at P.S. 18, she was the subject of two investigations by the special commissioner of investigation. The first found that she had participated in exercise classes while she was collecting what is known as “per session” pay, or overtime, to supervise an after-school program. The inquiry also found that she had failed to offer the overtime opportunity to others in the school, as required, before claiming it for herself.
The second investigation found that she had inappropriately requested and obtained notarized statements from two employees at the school in which she asked them to lie and say that she had offered them the overtime opportunity.
After those findings, we learn, “She moved to P.S. 30, another school in the Bronx, where she was principal briefly before being chosen by Teachers College to run its new school.”
So, let’s get this straight: She was found to be a liar, a cheat, and a thief, and then, with that all known, she was hired to two jobs as school principal?? An associate vice president of Teachers College said, “We felt that on balance, her recommendations were so glowing from everyone we talked to in the D.O.E. that it was something that we just were able to live with.” In short: once you’re plugged in, you stay plugged in.
p.47: Davies talks about how online drug dealers eventually want to leave the stressful business of drug dealing, and at this point they can cash in their reputation by taking a lot of orders and then disappearing with customers’ money. An end-of-career academic researcher can do something similar if they want, using an existing reputation to promote bad ideas. Usually though you wouldn’t want to do that, as there’s no anonymity so the negative outcome can reflect badly on everything that came before. The only example I can think of offhand is the Cornell psychology researcher Daryl Bem, who is now indelibly associated with some very bad papers he wrote on extra-sensory perception. I was also gonna include Orson Welles here, as back in the 1970s he did his very best to cash in his reputation on embarrassing TV ads. But, decades later, the ads are just an amusing curiosity and Orson’s classic movies are still around: his reputation survived just fine.
p.50: “When the same features of a system keep appearing without anyone designing them, you can usually be pretty sure that the cause is economic.” Well put!
p.57: Regarding Davies’s general point about fraud preying upon a general environment of trust, I want to say something about the weaponization of trust. An example is when a researcher is criticized for making scientific errors and then turns around, in a huff, and indignantly says he’s being accused of fraud. The gambit is to move the discussion from the technical to the personal, to move from the question of whether there really is salad oil in those tanks to the question of whether the salad oil businessman can be trusted.
p.62: Davies writes, “fraud is an unusual condition; it’s a ‘tail risk.'” All I can say is, fraud might be an unusual “tail risk” in business, but in science it’s usual. It happens all the time. Just in my own career, I had a colleague who plagiarized; another one who published a report deliberately leaving out data that contradicted the story he wanted to tell; another who lied, cheated, and stole (I can’t be sure about that one as I didn’t see it personally; the story was told to me by someone who I trust); another who smugly tried to break an agreement; and another who was conned by a coauthor who made up data. That’s a lot: two cases that directly affected me and three that involved people I knew personally. There was also Columbia faking its U.S. News ranking data; I don’t know any of the people involved but, as a Columbia employee, I guess that I indirectly benefited from the fraud while it was happening.
I’d guess that dishonesty is widespread in business as well. So I think that when Davies wrote “fraud is an unusual condition,” he really meant that “large-scale fraud is an unusual condition”; indeed, that would fit the rest of his discussion on p.62, where he talks about “big systematic fraud” and “catastrophic fraud loss.”
This also reminds me of the problems with popular internet heuristics such as “Hanlon’s razor,” “steelmanning,” and “Godwin’s law,” all of which kind of fall apart in the presence of actual malice, actual bad ideas, and actual Nazis. The challenge is to hold the following two ideas in your head at once:
1. In science, bad work does not require cheating; in science, honesty and transparency are not enough; just cos I say you did bad work it doesn’t mean I’m accusing you of fraud; just cos you followed the rules as you were taught and didn’t cheat it doesn’t mean you made the discovery you thought you did.
2. There are a lot of bad guys and cheaters out there. It’s typically a bad idea to assume that someone is cheating, but it’s also often a mistake to assume that they’re not.
p.65: Davies refers to a “black hole of information.” I like that metaphor! It’s another way of saying “information laundering”: the information goes into the black hole, and when it comes out its source has been erased. Traditionally, scientific journals have functioned as such a black hole, although nowadays we are more aware that, even if a claim has been officially “published,” it should still be possible to understand it in the context of the data and reasoning that have been used to justify it.
As Davies puts it on p.71, “People don’t check up on things which they believe to have been ‘signed off.’ The threat is inside the perimeter.” I’ve used that analogy too! From 2016: “the current system of science publication and publicity is like someone who has a high fence around his property but then keeps the doors of his house unlocked. Any burglar who manages to get inside the estate then has free run of the house.”
p.76: “The government . . . has some unusual characteristics as a victim (it is large, and has problems turning customers away).” This reminds me of scientific frauds, where the scientific community (and, to the extent that the junk science has potential real-world impact, the public at large) is the victim. Scientific journals have the norm of taking every submission seriously; also, a paper that is rejected from one journal can be submitted elsewhere.
p.77: “If there is enough confusion around, simply denying everything and throwing counter-accusations at your creditors can be a surprisingly effective tactic.” This reminds me of the ladder of responses to criticism.
p.78: Davies describes the expression “cool out the mark” as having been “brought to prominence by Erving Goffman.” That’s not right! Cooling out the mark was already discussed in great detail in linguist David Maurer’s classic book from 1940, The Big Con. More generally, I find Goffman irritating for reasons discussed here, so I really don’t like to see him credited for something that Maurer already wrote about.
p.114: “Certain kinds of documents are only valid with an accountant’s seal of approval, and once they have gained this seal of validity, they are taken as ‘audited accounts’ which are much less likely to be subjected to additional verification or checking.” Davies continues: “these professions are considered to be circles of trust. The idea is partly that the long training and apprenticeship processes of the profession ought to develop values of trust and honesty, and weed out candidates who do not possess them. And it is partly that professional status is a valuable asset for the person who possesses it.”
This reminds me of . . . academic communities. Not all, but much of the time. This perspective helps answer a question that’s bugged me for awhile: When researchers do bad work, why do others in their profession defend them? Just to step away from our usual subjects of economics and psychology for a moment, why were the American Statistical Association and the American Political Science Association not bothered by having given major awards to plagiarists (see here and here)? You’d think they’d be angry about getting rooked, or at least concerned that their associations are associated with frauds. But noooo, the powers that be in these organizations don’t give a damn. The Tour de France removed Lance Armstrong’s awards, but ASA and APSA can’t be bothered. Why? One answer is that they—we!—benefit from the respect given to people in our profession. To retract awards is to admit that this respect is not always earned. Better to just let everyone quietly go about their business.
On p.124, Davies shares an amusing story of the unraveling of a scam involving counterfeit Portuguese banknotes: “While confirming them to be genuine, the inspector happened to find two notes with the same serial numbers—a genuine one had been stacked next to its twin. Once he knew what to look for, it was not too difficult to find more pairs. . . .” The birthday problem in the wild!
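The inspector's luck is less surprising than it sounds, for the same reason that a room of 23 people has better-than-even odds of a shared birthday. Here's a quick check of that logic (the numbers are the classic birthday-problem ones, not figures from the Portuguese banknote case):

```python
# Probability that n items drawn uniformly and independently from
# num_serials possible values contain at least one duplicate.

def prob_duplicate(n, num_serials):
    """P(at least two of n items collide): one minus the probability
    that all n draws are distinct."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (num_serials - i) / num_serials
    return 1 - p_all_distinct

# Classic case: 23 people, 365 possible birthdays -> just over 50%.
print(round(prob_duplicate(23, 365), 3))  # → 0.507
```

Collisions show up at roughly the square root of the number of possible values, so even a modest stack of notes sampled from a large serial-number space can betray a duplicate-printing scheme.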
p.126: “mining is a sector of the economy in which standards of honesty are variable but requirements for capital are large, and you can keep raising money for a long time before you have to show results.” Kind of like some academic research and tech industries! Just give us a few more zillion dollars and eventually we’ll turn a profit . . .
p.130: “The key to any certification fraud is to exploit the weakest link in the chain.” Good point!
p.131: “It’s often a very good idea to make sure that one is absolutely clear about what a certification process is actually capable of certifying . . . Gaps like this—between the facts that a certification authority can actually make sure of, and those which it is generally assumed it can—are the making of counterfeit fraud.”
This reminds me of scientific error—not usually fraud, I think, but rather the run-of-the-mill sorts of mistakes that researchers, journals, and publicists make every day because they don’t think about the gap between what has been measured and what is being claimed. Two particularly ridiculous examples from psychology are the 3-day study that was called “long term” and the paper whose abstract concluded, “That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications,” even though the reported studies had no measures whatsoever of anyone “becoming more powerful,” let alone any actionable implications of such an unmeasured quantity. Again, I see no reason to think these researchers were cheating; they were just following standard practice of making strong claims that sound good but were not addressed by their data. Given that experimental scientists—people whose job is to connect measurement to a larger reality!—regularly make this sort of mistake, I guess it’s not a surprise that the same problem arises in business.
p.134: Davies writes that medical professionals “have a long training program, a strong ethical code and a lot to lose if caught in a dishonest act.” But . . . Surgisphere! Dr. Anil Potti! OK, there are bad apples in every barrel. Also, I’m sure there’s some way these dudes rationalize their deeds. Ultimately, they’re just trying to help patients, right? They’re just being slowed down by all those pesky regulations.
p.136: Davies writes, “The thing is, the certification system for pharmaceuticals is also a safety system.” I love that “The thing is.” It signals to me that Davies didn’t knock himself out writing this book. He wrote the book, it was good, he was done, it got published. When I write an article or book, I get obsessive about the details. Not that I don’t make typos, solecisms, etc., but I’m pretty careful to keep things trim. Overall I think this works; it makes my writing easier to read. But I do think Davies’s book benefits from its relaxed style, not overly worked over. No big deal, just something I noticed in different places in the book.
p.137: “Ranbaxy Laboratories . . . pleaded guilty in 2013 to seven criminal charges relating to the generic drugs it manufactured . . . it was in the habit of using substandard ingredients and manufacturing processes, and then faking test results by buying boxes of its competitors’ branded product to submit to the lab. Ranbaxy’s frauds were an extreme case (although apparently not so extreme as to throw it out of the circle of trust entirely; under new management it still exists and produces drugs today).” Whaaa???
p.145: Davies refers to “the vital element of time” in perpetuating a fraud. A key point here is that uncovering the fraud is never as high a priority for outsiders as perpetuating the fraud is for the fraudsters. Even when money is at stake, the amount of money lost by each individual investor will be less than what is at stake for the perpetrator of the fraud. What this means is that sometimes the fraudster can stay alive by just dragging things out until the people on the other side get tired. That’s a standard strategy of insurance companies, right? To delay, delay, delay until the policyholder just gives up, making the rational calculation that it’s better to just cut your losses.
I’ve seen this sort of thing before, that cheaters take advantage of other people’s rationality. They play a game of chicken, acting a bit (or a lot) crazier than anyone else. It’s the madman theory of diplomacy. We’ve seen some examples recently of researchers who’ve had to deal with the aftermath of cheating collaborators, and it can be tough! When you realize a collaborator is a cheater, you’re dancing with a tiger. Someone who’s willing to lie and cheat and make up data could be willing to do all sorts of things, for example they could be willing to lie about your collaboration. So all of a sudden you have to be very careful.
p.157: “In order to find a really bad guy at a Big Four accountancy firm, you have to be quite unlucky (or quite lucky if that’s what you were looking for). But as a crooked manager of a company, churning around your auditors until you find a bad ‘un is exactly what you do, and when you do find one, you hang on to them. This means that the bad auditors are gravitationally drawn into auditing the bad companies, while the majority of the profession has an unrepresentative view of how likely that could be.”
It’s like p-hacking! Again, a key difference is that you can do bad science on purpose, you can do bad science by accident, and there are a lot of steps in between. What does it mean if you use a bad statistical method, people keep pointing out the problem, and you keep doing it? At some point you’re sliding down the Clarke’s Law slope from incompetence to fraud. In any case, my point is that bad statistical methods and bad science go together. Sloppy regression discontinuity analysis doesn’t have to be a signal that the underlying study is misconceived, but it often is, in part because (a) regression discontinuity is a way to get statistical significance and apparent causal identification out of nothing, and (b) if you are doing a careful, well-formulated study, you might well be able to model your process more carefully. Theory-free methods and theory-free science often go together, and not in a good way.
p.161: “The problem is that spotting frauds is difficult, and for the majority of investors not worth spending the effort on.” Spotting frauds is a hobby, not a career or even a job. And that’s not even getting into the Javert paradox.
p.173: “The key psychological element is the inability to accept that one has made a mistake.” We’ve seen that before!
p.200: “The easier something is to manage—the more possible it is for someone to take a comprehensive view of all that’s going on, and to check every transaction individually—the more difficult it is to defraud.” This reminds me of preregistration in science. It’s harder to cheat in an environment where you’re expected to lay out all the steps of your procedure. Cheating in that context is not impossible, but it’s harder.
p.204: Davies discusses “the circumstances under which firms would form, and how the economy would tend not to the frictionless ideal, but to be made up by islands of central planning linked by bridges of price signals.” Well put. I’d long had this thought, but without a clear formulation in words it had never quite crystallized for me. This is the bit that made me say the thing at the top of this post, about this being the best economics book I’ve ever read.
p.229: “as laissez-faire economics was just getting off the ground, the Victorian era saw the ideology of financial deregulation grow up at the same time as, and in many cases faster and more vigorously than, financial regulation itself.” That’s funny.
p.231: “The normal state of the political economy of fraud is one of constant pressure toward laxity and deregulation, and this tends only to be reversed when things have got so bad that the whole system is under imminent threat of losing its legitimacy.” Sounds like social psychology! Regarding the application to economics and finance, I think Davies should mention Galbraith’s classic book on the Great Crash, where this laxity and deregulation thing was discussed in detail.
p.243: Davies says that stock purchases by small investors are very valuable to the market because, as a stockbroker, you can “be reasonably sure that you’re not taking too big a risk that the person selling stock to you knows something about it that you don’t.” Interesting point, I’m sure not new to any trader but interesting to me.
p.251: “After paying fines and closing down the Pittston hole, Russ Mahler started a new oil company called Quanta Resources, and somehow convinced the New York authorities that despite having the same owner, employees, and assets, it was nothing to do with the serial polluter that they had banned in 1976.” This story got me wondering: were the authorities asleep at the switch, or were they bribed, or did they just have a policy of letting fraudsters try again?
As Davies writes on p.284, “comparatively few of the case studies we’ve looked at were first offenses. . . . there’s something about the modern economic system that keeps giving fraudsters second chances and putting people back in positions of responsibility when they’ve proved themselves dishonest.” I guess he should say “political and economic system.”
Davies continues: “This is ‘white-collar crime’ we’re talking about after all; one of its defining characteristics is that it’s carried out by people of the same social class as those responsible for making decisions about crime and punishment. We’re too easy on people who look and act like ourselves.” I guess so, but also it can go the other way, right? I think I’m the same social class as Cass Sunstein, but I don’t feel any desire to go easy on him; indeed, it seems to me that, with all the advantages he’s had, he has even less excuse to misrepresent research than someone who came in off the street. From the other direction, he might see me as a sort of class traitor.
p.254: “It’s a crime against the control system of the overall economy, the network of trust and agreement that makes an industrial economy livable.” That’s how I feel about Wolfram Research when they hire people to spam my inbox with flattering lies. If even the classy outfits are trying to con me, what does that say about our world?
p.254: “Unless they are controlled, fraudulent business units tend to outcompete honest ones and drive them out of business.” Gresham!
p.269: “Denial, when you are not part of it, is actually a terrifying thing. One watches one’s fellow humans doing things that will damage themselves, while being wholly unable to help.” I agree. This is how I felt when corresponding with the ovulation-and-clothing researchers and with the elections-and-lifespan researchers. The people on the other side of these discussions seemed perfectly sincere; they just couldn’t consider the possibility they might be on the wrong track. (You could say the same about me, except: (1) I did consider the possibility I could be wrong in these cases, and (2) there were statistical arguments on my side; these weren’t just matters of opinion.) Anyway, setting aside whether I was right or wrong in these disputes, the denial (as I perceived it) just made me want to cry. I don’t think graduate students are well trained in handling mistakes, and then when they grow up and publish research, they remain stuck in this attitude. I can see how this could be even more upsetting when real money and livelihoods are on the line.
Finally
In the last sentence of the last page of his book, Davies writes, “we are all in debt to those who trust; they are the basis of anything approaching a prosperous and civilised society.”
To which I reply, who are the trusters to whom we are in debt? For example, I don’t think we are all in debt to those who trusted scams such as Theranos or the Hyperloop, nor are we in debt to the Harvard professor who fell for the forged Jesus document and then tried to explain away its problems rather than just listening to the critics. Nor are we in debt to the administrations of Cornell University, Ohio State University, the University of California, etc., when they did their part to defuse criticism of bad work being done by their faculty who had been so successful at raising money and getting publicity for their institutions.
I get Davies’s point in the context of his book: if you fall for a Wolfram Research scam (for example), you’re not the bad guy. The bad guy is Wolfram Research, which is taking advantage of your state of relaxation, tapping into the difficult-to-replenish reservoir of trust. In other settings, though, the sucker seems more complicit, not the bad guy, exactly—ultimately the responsibility falls on the fraudsters, not the promoters of the fraud—but their state of trust isn’t doing the rest of us any favors, either. So I’m not really sure what to think about this last bit.
P.S. Sean Manning reviews the book here. Perhaps surprisingly, there’s essentially no overlap between Manning’s comments and mine.