## I hate to get all Gerd Gigerenzer on you here, but . . .

Jonathan Cantor points me to an opinion piece by psychologist Reid Hastie, “Our Gift for Good Stories Blinds Us to the Truth.”

I have mixed feelings about Hastie’s article. On one hand I do think his point is important. It’s not new to me, but presumably it’s new to many readers of bloomberg.com. I like Hastie’s book (with Robyn Dawes), Rational Choice in an Uncertain World, and I’m predisposed to like anything new that he writes.

On the other hand, there’s something about Hastie’s article that bothered me. It seemed a bit smug, as if he thinks he understands the world and wants to just explain it to the rest of us. That could be fine—after all, Hastie is a distinguished psychology researcher—but I wasn’t so clear that he’s so clear on what he’s saying. For example:

The human brain is designed to support two modes of thought: visual and narrative. These forms of thinking are universal across human societies throughout history, develop reliably early in individuals’ lives, and are associated with specialized regions of the brain.

Is that really true? How does math fit into this picture? Or music? Music has a sort of narrative structure but it doesn’t seem quite like a story, either.

Hastie continues:

What isn’t universal or natural is the kind of highly structured cognitive processes that underlie logical and mathematical thinking.

Not natural . . . really? Maybe math is not universal, but certainly it’s natural. I was doing it when I was 2 years old. And music, that does seem to be universal, no?

Later on:

The mathematics of causal reasoning has recently experienced a major change, with the widespread acceptance of Bayesian Causal Networks as a normative, rational model for causal induction and reasoning.

Ummm . . . maybe Hastie is a bit too accepting of this particular story! I think Bayesian inference is great—I wrote two books on the topic!—but I wouldn’t go so far as to call it “a normative, rational model for causal induction and reasoning.” But I suppose that if I feel able to opine about psychology, I can’t object to Hastie expressing his views on statistics.

Hastie continues with a famous example:

The legendary theorists of decision-making Amos Tversky and Daniel Kahneman illustrated [our desire for stories] with the following pair of judgment questions: One group of respondents was asked, “What is the probability that a massive flood will occur sometime in the next year and drown more than 1,000 Americans?” The typical estimate was low (less than 20 percent). But, when another comparable sample of respondents was asked, “What is the probability that an earthquake in California will be followed by a flood in the next year that drowns at least 1,000 Americans?” the estimates were significantly higher.

The irrationality is that the second question is about a much more specific event, an earthquake that would be only one of the several reasons for the flood referred to in the first question. It is logically impossible for the second probability to be higher than the first. But, because the second question provides a plausible scenario for the unlikely outcome in the first query, our innate preference for a good story trumps our logical thinking skills.

This story is a great example of the availability heuristic, but I don’t see how it demonstrates a problem with “our logical thinking skills.” When responding to the first question, many people have difficulty visualizing that massive flood. The second question gives a clue. But I don’t see the combination of responses (coming from different sets of people) as indicating irrationality. Most people are not flood experts. They answer the questions as best they can, and when you give more information they will use it.
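To make the logical point concrete (setting aside the psychology), here is a minimal numerical sketch of why the conjunction can never beat the marginal probability. The numbers below are invented purely for illustration; they are not estimates of real flood or earthquake risk:

```python
# Hypothetical probabilities, chosen only to illustrate the conjunction rule.
p_quake = 0.10                 # P(major California earthquake next year)
p_flood_given_quake = 0.05     # P(deadly flood | earthquake)
p_flood_given_no_quake = 0.01  # P(deadly flood | no earthquake)

# Law of total probability: the unconditional flood probability
# averages over the earthquake and no-earthquake scenarios.
p_flood = (p_flood_given_quake * p_quake
           + p_flood_given_no_quake * (1 - p_quake))

# The conjunction "earthquake followed by flood" is just one term
# of that sum, so it can never exceed the unconditional probability.
p_quake_and_flood = p_flood_given_quake * p_quake

assert p_quake_and_flood <= p_flood
print(f"P(flood) = {p_flood:.3f}")                      # 0.014
print(f"P(quake and flood) = {p_quake_and_flood:.3f}")  # 0.005
```

Whatever a vivid scenario does to the conditional probability P(flood | earthquake), the joint event remains a single term of the total-probability sum, so it is bounded above by the unconditional probability of a flood.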

I hate to get all Gerd Gigerenzer on you here, but what’s the point of saying that this “trumps our logical thinking skills”? I think Kahneman and Tversky did better, decades ago, by writing of “heuristics and biases.”

### What’s the political message here?

The article under discussion concludes with:

So the next time you hear a good story about why the financial recession, or any other economically significant event, was caused by a single collection of bad actors — or how a simple linear narrative “explains” an important event — remember this: Just as we are wired to like a diet rich in fats and sugars, we have an appetite for simple, coherent narratives. Neither habit is good for our long-term health.

(Reid Hastie, a professor of behavioral science at the University of Chicago Booth School of Business, is a contributor to Business Class. The opinions expressed are his own.)

Aaahhhh, now I get the message: The financial crisis is nobody’s fault! Let’s put aside the politics of blame, let’s all work together, etc. etc. OK, fine. Does this apply to all catastrophes? If you know someone in a plane that crashed, are we allowed to check if the pilot was stoned before takeoff? If someone takes $100,000 from you on a fraudulent pretext, and you catch him, are you allowed to try to collect? Or is it only in the financial crisis that we should set aside all “good stories” and “simple linear narratives”? I agree that our financial problems are complex, and I’m all for warning people about the simplicity of storytelling, but I’m also a bit suspicious of someone from the University of Chicago School of Business telling me not to think about stories of the financial crisis.

### Getting quantitative

Also, I’m surprised that, when people estimate “the probability that a massive flood will occur sometime in the next year and drown more than 1,000 Americans” as less than 20%, Hastie characterizes that estimate as “low.” Even Katrina drowned only 387 people (according to this source, which I found by googling Katrina drownings). If a 20% chance of this “massive flood” occurring in a one-year period is “low,” I’d be interested in what Hastie thinks is a more reasonable probability estimate.

### Responding

Hastie’s article bothered me for two reasons. First, what does it mean to describe “the kind of highly structured cognitive processes that underlie logical and mathematical thinking” as “unnatural”? I don’t quite get what “natural” means here. Second, I see an implicit political message, which seems to be that we shouldn’t blame anyone for the financial crisis:

We know there was no single cause or event that set in motion the crisis and that the truth is complex and multicausal. So why do we keep seeking the easy answers? It may be that we are hard-wired to do so.
Or, as the guy said in Repo Man, “it’s society’s fault.”

I contacted bloomberg.com, the publishers of the above-linked article, but was told:

We typically don’t publish opeds responding to articles we’ve published, though we welcome letters to the editor. We also post corrections to pieces containing factual errors and would gladly review any objections you have to Mr. Hastie’s column.

Fair enough, but in this case I don’t think the problems would be resolved by a correction note. I’m more bothered by the totality of the piece. For example, the claim that logical reasoning is “unnatural” is not quite a “factual error,” but it still seems wrong to me.

P.S. Someone who knows the judgment and decision making field better than I do writes:

I don’t think that Reid has a political agenda here. (He has only been at Chicago for a few years, and Chicago’s School of Business is not monolithic.) . . . To say that blame narratives are oversimplified is not the same as saying that nobody should be blamed; you may be reading the latter subtext into his text.

So maybe I was being unfair. Although I’d feel a little better about Hastie’s column if he’d clarified that, even though stories can be oversimplified, the “life is complicated” defense shouldn’t be used to get people off the hook. Also, I’m still unhappy about the claim that logical and mathematical reasoning is “unnatural.” But this fits with the innumeracy of thinking there’s a greater-than-20% chance of a major flood in any given year. I feel that, to Hastie, numbers are just words. Which is consistent with the idea that mathematical reasoning is unnatural to him.

### 38 Comments

1. zbicyclist says:

An analogy: Two cars both run a 4-way stop, resulting in an accident. If either car had stopped, the accident would have been avoided. Who was to blame?

The financial crisis seems like this to me. So many actors, so many messups.

This doesn’t let any particular screwup off the hook — maybe if that particular screwup had acted better, the landing would have been softer.

I have a meeting in an hour that’s pretty much like this: one unfortunate result, with four departments likely to try to take as little blame for the result (and volunteer for as little of the corrective action) as possible. It was, of course, the other 3 departments that messed up, not mine. ;)

• David Huelsbeck says:

I think your analogy is better than Hastie’s more convoluted narrative-fallacy story while avoiding the shortcomings that Andrew exposes. However, I think it could be incrementally improved by replacing the four-way with an uncontrolled intersection.

I agree that Hastie might have a political agenda, but it seems more likely that he is just capitalizing on the financial crisis to spruik his own work, even if it is only tangentially related. I’m sure Booth has a PR office that tries to make opportunities for faculty to get their names into media outlets like Bloomberg.

That said, I like Hastie’s conclusion, but for an altogether different reason. I think that the various bad-actors narratives are appealing because they offer the false sense of security that we could all still be on the golden escalator to risk-free prosperity if not for those bad actors. The (probably correct, in my opinion) explanation, that the economy is extremely complex and vulnerable to catastrophic events that no one really understands, is too scary for most people. They’d rather think that disasters are the work of angry gods that can be appeased by worship, sacrifice, and punishment of the wicked. I don’t think that wishing for human agency over that which is currently beyond our control is exactly the same as the narrative fallacy.

2. Jonathan says:

If you thought that was bad, this was way worse (by another U Chicago professor, Stephen Kaplan): http://mobile.bloomberg.com/news/2012-04-25/jobs-not-the-1-are-what-make-americans-fret

• Andrew says:

Jonathan: I don’t actually think Kaplan’s article is so bad. Sure, it’s nothing but a juxtaposition of two time series with the implication of causality—but on the upside, it’s clear that this is all it is. Kaplan is making the reasonable point that an increase in top-end inequality alone will not make people miserable. I don’t see that his article contributes anything useful to the discussion (I assume he’d also argue that taxing the rich to pay our bills would be bad for the economy, but he doesn’t actually get to that point), but he’s making a simple enough point that I don’t see his article as misleading.

One reason I’m more bothered by Hastie’s article is that I’m more in tune with psychology research than economics research, so I get more disturbed when it gets garbled.

• Jonathan says:

My concern with the Kaplan article, which I expressed in the comments there, is that I tried replicating it and could not (I think one year of the GSS has about 50% of the respondents missing). There really was no shift whatsoever in the opinion variables (it hovers around 15%). Most of the shifts that are made are going from Very Happy to Somewhat Happy, not the polar extremes he was assuming. It seemed he was data mining for something that wasn’t there. This is worse because it could be pretty misleading and inform policy, but from a macro level I agree that the Hastie article is worse.

• Andrew says:

Aaah, if he got the data wrong, that’s another story! Bad indeed.

3. Ely Spears says:

I’m not sure I understand the objections here. After reading it, the original article sounds to me like it is just talking about the narrative fallacy ( http://wiki.lesswrong.com/wiki/Narrative_fallacy ).

Your points about the smugness are good, and the questions about how math and music fit in deserve to be answered too. The stuff about Bayesian Causal Networks seems interesting, but out of place in this article. However, the comments about the financial crisis being complex and multicausal don’t deserve the criticism you gave. Is it true that he means to imply we cannot blame anyone just because things are complex and multicausal? It doesn’t come off that way to me. He is arguing against oversimplifications and against going after someone “because they are the bad guy” in some easy-to-believe narrative about what happened. We should still go after people who are at fault, just not for that kind of weak reason, and we shouldn’t expect there to be a single, low-resolution explanation that captures all of the causal factors for the financial crisis.

While I agree with several of your criticisms of the article, I think the stuff about the financial crisis is just uncontroversial advice about the narrative fallacy. Yes, Kahneman and Tversky (and dozens of others) have already described this well, and this article is probably not very useful, but it’s also not obviously wrong. In fact, I think that in light of some other pieces written on how obfuscated the analysis of the financial crisis has been, this piece does a decent job of clarifying why you shouldn’t seek an unreasonably simple explanation. (See here and the linked paper for example: http://marginalrevolution.com/marginalrevolution/2011/12/andrew-lo-reviews-21-books-on-the-financial-crisis.html ).

• Andrew says:

Ely: I agree that the part about the narrative fallacy is fine, but most of the column seems to be spent mischaracterizing judgment-and-decision-theory research, which seems odd to me given that this is Hastie’s academic field. When somebody garbles his own research in an article he writes himself, then I get suspicious that there’s an agenda.

• Popeye says:

It seems to me that Hastie is engaging in a classic two-step that you’ve argued against in other contexts. On one hand, he talks about the narrative fallacy, about how people believe in simple stories. On the other hand, he exploits the narrative fallacy by proposing an easily digestible narrative himself (no one is to blame for the financial crisis; people are just bad logical thinkers who need to believe in villains). Add some smugness and dubious motives, and the total package doesn’t seem that different from economics exceptionalism (“economists have a special culture that helps them see that culture doesn’t matter at all”).

4. When Kahneman and Tversky ask that question, some lawyer in the courtroom should jump up and shout, “Objection!” The question people hear is probably not the one K&T think they’re asking. The strictly literal question is: what is the probability of an earthquake AND a flood? What I suspect most people (including me) hear is, “If there were an earthquake in California, what is the probability that it will be followed by a flood that kills 1,000 people?” The probability of a flood given an earthquake is greater than the probability of a flood given no earthquake. Is that incorrect? I don’t know; I’m not an earth scientist. But even before the 2011 earthquake and tsunami, non-scientists probably had the sense that earthquakes might cause a flood.

• Dan Nexon says:

Yep!

• Ely Spears says:

The problem is that the second statement is totally subsumed by the first statement, regardless of how you want to read the second statement. Even if you think it is asking you about Pr(flood | earthquake), in the first question you’re asked for the unconditional probability that such a flood will happen next year. So when you answered the first question, you should have taken into account however likely you thought it would be that an earthquake would even happen to cause such a flood.

In other words, your answer to the first question should summarize your prior beliefs about the occurrence of *every* different thing that could lead to a flood afterward. Then the second statement is asking a question about only one specific term, an earthquake-related term, that would have contributed to your first answer. So either way, something is inconsistent. Either people answer the first question with a poor approximation of their true prior assessments, or else their beliefs don’t obey logical rules of probability, even if they regard the second question’s wording as being conditional (in which case they should also have factored that conditional stuff into their first answer).

• The assertion is that the people answering the question are assuming they are allowed to pretend that an earthquake definitely happened, whereas in the unconditional case they have to consider the probability that it will happen vs. the probability that it will not happen.

P(flood) = P(flood | earthquake) * P(earthquake) + P(flood | no EQ) * P(no EQ)

It’s trivial to see that if the person answering believes that they’ve been told to assume that P(earthquake) = 1, their answer will be very different than if they use their prior for P(earthquake), which is undoubtedly going to put P(earthquake) < 0.25 or so; big earthquakes don’t happen all that often, and everyone is aware of this.

• Ely Spears says:

I see your point in this case. If the readers interpret it as allowing them to assume P(earthquake) = 1, then you’re right. In my reading, though, even if they interpret it conditionally, the temporal aspect of the question makes the P=1 interpretation invalid. P(flood follows earthquake within the next year) must be smaller than P(flood happens within the next year).

There have been other experiments with this same setup but not involving the flood example. Some are listed in Kahneman’s recent book; others are in the Kahneman/Tversky papers. Maybe one of them is to your liking.

But even when the question has been more explicitly phrased as “Is P(flood follows earthquake within the next year) greater than or less than P(flood happens within the next year)?” people still report larger estimates for the first. Consider the famous “Linda” example, of being a bank teller who is a feminist vs. being just a bank teller. Even grad students of probability and stats ordered the probability of those events incorrectly ( http://en.wikipedia.org/wiki/Conjunction_fallacy ). In re-reading that article, it relates to this post’s title in that Gigerenzer criticized the legitimacy of the Linda experiment. FWIW, I find Gigerenzer’s criticisms lacking. He misses the point; it’s not framing, it’s that humans substitute intuitive plausibility when needing to estimate probability, because intuitive plausibility is easier and our brains are empirically lazy. The conjunction fallacy is an example of the substitution effect: answering a different question because it requires less cognitive effort.

• revo11 says:

@Ely I think you’re incorrect. As you say, it is very easy for a casual listener to mistakenly parse the second question as asking for Pr(flood | earthquake). However, Pr(flood | earthquake) > Pr(flood) does not violate any laws of probability, and in fact, you’d probably be correct (since this is relating to a coastal area). You might be thinking of the total probability theorem, which implies Pr(flood & earthquake) <= Pr(flood).

I find it really questionable to use the responses as an estimate of the proportion of the population making a conjunction fallacy — it’s going to be a mixture of people that misparse the question and people that are genuinely making a conjunction fallacy. I wouldn’t be surprised if the majority simply misparsed the question, especially with the extra verbiage about “drowns at least 1,000 Americans” to keep track of.

Seems like a poorly phrased question for such a seminal paper, but hopefully this is just a second/third-hand description of the study and they make this caveat in the original.

• revo11 says:

edit: “you’d probably be correct” -> “is probably correct”

• Ely Spears says:

I think you misunderstood me, but it was my fault for poor wording. I was thinking that if they interpreted the question conditionally, then they had to incorporate their prior belief. Since P(flood) = P(flood | earthquake) * P(earthquake) + P(flood | no EQ) * P(no EQ), the first term, P(flood | earthquake) * P(earthquake), must be ≤ P(flood). I was not talking about P(flood | earthquake) in isolation, but I failed to make that clear.

I agree that the statement of the problem should be made better so as to be very sure the subjects are not making this confusion, but as in my reply above to Daniel Lakeland, I think the classic Linda example shows this same artifact without nearly as much effect from wording. I’m convinced, at least, that this is a real effect and that the conjunction fallacy permeates evolved human cognition. I’m fairly a nobody though, so my believing that doesn’t count for much :P

• Steve Sailer says:

“What is the probability that an earthquake in California will be followed by a flood in the next year that drowns at least 1,000 Americans?”

When I was a Boy Scout during the 2/9/1971 Sylmar earthquake in California, we mobilized to help evacuate the west half of the San Fernando Valley because of fears that the Van Norman dam would collapse.

In general, I find Kahneman’s book pretty aspergery. My review of the book is here: http://takimag.com/article/the_irrational_agent/print#axzz1utcF9Nuo

• Andrew says:

Steve: Your review is interesting, but, to be fair, many of the experiments of Kahneman, Tversky, and that crew are pretty bullet-proof. They and their psychologist colleagues are well aware of Grice’s principle and other issues involved in these measurements.

This is not to say that everything K&T said was correct, but they did avoid a lot of the obvious flaws that a casual reader would think of. Remember, this work has survived decades of peer review and replication. Topics such as the availability heuristic have been studied all over the place.

• Steve Sailer says:

Sorry, but you’re missing my point, which is that it’s hardly surprising that Kahneman found that it’s easy to fool people, because conmen and comedians have been doing it forever.

• Andrew says:

Sure, but their results are pretty robust. And in their early experiments, in which they asked research psychologists various questions about sample size and statistical significance, they weren’t trying to fool people; they were asking straightforward statistics questions that just turned out to be difficult. It took a while before they abstracted their results down to some very clean “cognitive illusions.”

5. Thank you for this post! A friend linked me to that opinion piece last week and I found it distasteful. I ended up expressing my critiques in terms of all of the ways in which “simple linear narratives” about the crisis are produced and disseminated by organizations, not individuals; the piece completely ignores this. NPR’s Planet Money and This American Life produced fantastically useful and complicated narratives of the financial crisis and received widespread acclaim for so doing. Perhaps Fox News and CNN fall back more often on “simple linear narratives.” But either way, we have to ask why these organizations treat the story the way they do – and organizational thinking is distinct from individual thinking (here lies the difference between Herbert Simon’s bounded rationality, where organizational design can produce more or less rational organizations, and Kahneman and Tversky’s, where failures of perfect rationality are endemic to basic human reasoning).

Your analysis adds a lot more depth to what’s wrong with the statistical part of the argument and was very helpful – it hadn’t even occurred to me to check how common actual floods killing 1,000 or more are, but of course they are quite rare in the US.

6. t says:

1. I don’t think that’s a quote from Repo Man. IMDB has this:

Duke: The lights are growing dim, Otto. I know a life of crime has led me to this sorry fate, and yet, I blame society. Society made me what I am.

Otto: That’s bullshit. You’re a white suburban punk just like me.

Duke: Yeah, but it still hurts.

2. As a film, Repo Man unquestionably comes down on the side of lenders:

Bud: Credit is a sacred trust, it’s what our free society is founded on. Do you think they give a damn about their bills in Russia? I said, do you think they give a damn about their bills in Russia?

Otto: They don’t pay bills in Russia, it’s all free.

Bud: All free? Free my ass. What are you, a fuckin’ commie? Huh?

Otto: No, I ain’t no commie.

Bud: Well, you better not be. I don’t want no commies in my car. No Christians either.

3. I bet I could pull something like the “logical reasoning is not natural” line from a million articles. You really are upset about the idea that he is excusing the banks. But exactly like the Repo Man quote on credit, I blame borrowers for taking too many risks that I would not myself have ever taken, and I blame Fannie Mae and many Congresses and several Presidents for imagining that they could change people by making them into home owners with no money down. (If I need a sponsor for my feelings to be worthy, I probably get this from Arnold Kling’s many posts about it on Econlog.) Should I–and Kling–be upset that Hastie is letting borrowers and government off the hook, exactly as you are upset he is letting banks off the hook?

• Andrew says:

1. Dammit! That’s what happens when I try to remember a quote from a movie I saw almost 30 years ago.

2. Yes, I’m upset that he’s letting borrowers and government off the hook too.

• Steve Sailer says:

You should watch “Repo Man” more frequently. I do.

• Steve Sailer says:

From Repo Man, on causality and probability:

“A lot o’ people don’t realize what’s really going on. They view life as a bunch o’ unconnected incidents ‘n things. They don’t realize that there’s this, like, lattice o’ coincidence that lays on top o’ everything. Give you an example, show you what I mean: suppose you’re thinkin’ about a plate o’ shrimp. Suddenly someone’ll say, like, “plate,” or “shrimp,” or “plate o’ shrimp” out of the blue, no explanation. No point in lookin’ for one, either. It’s all part of a cosmic unconsciousness.”

7. Brian says:

“I’m more bothered by the totality of the piece. For example, the claim that logical reasoning is ‘unnatural’ is not quite a ‘factual error’ but it still seems wrong to me.”

Unfortunately this sort of stuff has been doing the rounds for a good few decades – the idea that our brains aren’t “wired” for statistical or logical thinking. Most evolutionary psychologists take issue with this (Cosmides, Gigerenzer, etc.), but it’s a pretty mainstream view beyond just pop psychology. Dan Gilbert made a similar argument to Hastie’s in Nature last summer, which we responded to with a slightly ranty letter (shameless self-promotion):

http://www.nature.com/nature/journal/v474/n7351/full/474275a.html

http://www.nature.com/nature/journal/v475/n7357/full/475455c.html

8. Nick says:

Well, since the postmodernists got us to mistrust meta-narratives, I guess it was only a matter of time until we started to mistrust narratives themselves. Who needs to understand causal relationships at all when you can have the big machine in the cloud tell you that if you bought X, you might like Y?

“as if he thinks he understands the world and wants to just explain it to the rest of us.”

That’s it exactly. Translation: don’t believe in the power of narratives, except for mine…

9. Kevin Dick says:

I agree that “unnatural” is probably not the best description of logical thought. I think Kahneman’s characterization in _Thinking, Fast and Slow_ is better: logical thought is dramatically more “expensive,” and the brain seeks to minimize costs. But in a popular article where he obviously wants to grab the reader’s attention and shift his opinion, “unnatural” seems within the realm of poetic license.

Also, I think his political message is more “let’s not make things worse by overreacting on one dimension.” In this sense, he’s trying to counteract a common politician’s reaction, which is to attempt to find a scapegoat. Again, this is what you would expect in a popular article where you may be trying to shift the balance of opinion toward what you think is correct by somewhat overstating your case.

10. [...] Gelman points us to a peculiar Bloomberg column by Reid Hastie that refers to some well-known research results from [...]

11. Jonathan says:

I’m sorry all, but this also got even worse. In the manner of Freakonomics, I give to you “Roe v. Wade stimulates innovation”: http://www.bloomberg.com/news/2012-05-14/how-roe-v-wade-empowered-u-s-investors.html

• Andrew says:

What the hell . . . ? On the other hand, it says “The opinions expressed are his own.” At this point, it looks like bloomberg.com is just trolling. I won’t dignify this guy with a blog post, but if I were to write something, I might suggest a few alternative theories, such as the idea that this guy’s big bucks are due to Ross Perot’s big ears. After all, if Perot had smaller ears, he probably would’ve won the 1992 election; then no Bill Clinton, no tech boom, no budget surplus . . . and no free money available for the Bush tax cuts. No Bush tax cuts means that rich dudes (sorry, I mean “investors”) would have less extra $; after the country club memberships, Beemers, and rehab clinic fees, they wouldn’t have had the spare cash needed to prop up the stock market; then we would’ve had the crash in 2004 instead of 2008, thus no housing bubble . . . hey, maybe we should’ve all voted for Perot back then!

12. Steve Sailer says:

If average people weren’t pretty decent at noticing probabilistic patterns, then Our Betters wouldn’t constantly have to be lecturing us against stereotyping and profiling.

I will certainly admit that most people aren’t very good at reasoning about probabilistic patterns (including the intellectuals who denounce stereotyping), but people are not at all bad at picking up statistical differences.

• Jonathan says:

No, they just misinterpret them considerably.

13. RSA says:

Reid Hastie:

The typical estimate was low (less than 20 percent). But, when another comparable sample of respondents was asked, “What is the probability that an earthquake in California will be followed by a flood in the next year that drowns at least 1,000 Americans?” the estimates were significantly higher.

Here’s a strange thing. Out of curiosity, I looked up Tversky and Kahneman’s article to find this: “The estimates of the conjunction (earthquake and flood) were significantly higher than the estimates of the flood (p < .01, by a Mann-Whitney test). The respective geometric means were 3.1% and 2.2%." So "less than 20 percent" is literally correct, but misleading, and "estimates were significantly higher" is flat-out wrong.

14. [...] article on the human tendency to overvalue information presented as stories, Reid Hastie writes: Andrew (and Commenters) … I’d like to try to clarify some of the statements and implications [...]