(Forthcoming in Psychology Today)

TL;DR key points

The seven "irrational" habits of highly rational people:
1. They are confident in things despite "no good evidence" for them
2. They are confident in things which are outright false
3. They countenance the "impossible" and are "paranoid"
4. They avoid risks that don't happen
5. They pursue opportunities that fail
6. They are often irrational
7. They do things that are often "crazy" or "unconventional"

How to distinguish the rational from the irrational:
1. Measure calibration
2. Learn norms of reasoning
3. Think in terms of expected utility theory
THE IMPORTANCE OF RECOGNIZING WHAT'S RATIONAL AND WHAT'S NOT

If someone were as rational as could be, with many accurate and trustworthy judgments about the world and with sound decisions, would we recognize it? There are reasons to think the answer is "No". In this piece, I aim to challenge prevailing intuitions about rationality: I will argue that the philosophy and science of judgment and decision-making reveal a number of ways in which what appears to be rational diverges from what actually is rational.

This piece takes its title from Stephen Covey's well-known book "The 7 Habits of Highly Effective People". I will argue that, similarly, there are seven habits of highly rational people, but these habits can appear so counter-intuitive that others label them as "irrational". Of course, the rationality of these habits might be obvious to specialists in judgment and decision-making, but I find they are often not so obvious to the general readers for whom this piece is written. In any case, not only are these habits potentially interesting in their own right, but recognizing them may also open our minds, helping us better understand the nature of rationality and better identify the judgments and decisions we should trust, or not trust, in our own lives.

Without further ado, then, I present...

THE SEVEN "IRRATIONAL" HABITS OF HIGHLY RATIONAL PEOPLE

1. Highly rational people are confident in things despite "no good evidence" for them

The first habit of highly rational people is that they are sometimes confident in things when others think there is "no good evidence" for them. One case where this shows up extremely clearly is the Monty Hall problem, as I discuss in detail in a blogpost here. In the problem, a prize is randomly placed behind one of three doors, you select a door, and then the gameshow host, named Monty Hall, will open one of the other doors that does not conceal the prize. If the door you select conceals the prize, then Monty Hall will open either of the other two doors with equal likelihood. But if the door you select does not conceal the prize, then Monty Hall must open the only door that neither conceals the prize nor was selected by you.

In these circumstances, as I explain in the blogpost, if you select door A and Monty Hall opens door C, then there is a 2/3 probability that door B conceals the prize. Door C being opened therefore constitutes "evidence" that door B conceals the prize. Furthermore, consider an adaptation called the "new Monty Hall problem": in this version, door C would be opened with only a 10% likelihood if door A conceals the prize, and there is then provably a 91% probability that door B conceals the prize after door C is opened. Here, the truly rational response is to be very confident that door B conceals the prize.

Despite this, in my experiments, everyone without training who encountered these problems got the wrong answer, and the vast majority thought door B had only a 50% probability of concealing the prize in both versions of the problem. This effectively means they thought there was no good evidence for door B concealing the prize when there in fact was! What's more, the studies found that not only did participants fail to recognize this good evidence, but they were also more confident in their incorrect answers.
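For readers who want to check the 2/3 and 91% figures for themselves, here is a minimal sketch (my own illustration in Python, not taken from the blogpost) that applies Bayes' theorem to both versions of the problem, assuming you select door A and Monty opens door C.

```python
# A minimal sketch (not from the article) that checks the quoted probabilities
# with Bayes' theorem. Assumes you pick door A and Monty opens door C.

def prob_B_given_C_opened(p_open_C_if_prize_A):
    """P(prize behind B | Monty opens C), given how likely Monty is to open
    door C when the prize is behind your chosen door A."""
    prior = 1 / 3  # the prize is placed behind each door with equal probability
    # Likelihood of Monty opening door C under each hypothesis:
    #   prize behind A: Monty may open B or C (C with the given probability)
    #   prize behind B: Monty must open C (he can open neither your door nor the prize door)
    #   prize behind C: Monty never opens the prize door
    p_C_given_A = p_open_C_if_prize_A
    p_C_given_B = 1.0
    p_C_given_C = 0.0
    evidence = prior * (p_C_given_A + p_C_given_B + p_C_given_C)
    return (prior * p_C_given_B) / evidence

print(prob_B_given_C_opened(0.5))  # classic Monty Hall: 0.666... (2/3)
print(prob_B_given_C_opened(0.1))  # "new" Monty Hall: 0.909... (about 91%)
```

The only difference between the two versions is how likely Monty is to open door C when your own door conceals the prize, which is exactly what the function's parameter captures.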
Compared to participants who were trained and more likely to get the correct answers to these problems, untrained participants were on average both more confident in the correctness of their (actually incorrect) answers and more convinced that they understood why those answers were correct. What this shows is that truly rational people may recognize objectively good evidence for hypotheses where others think there is none, leading them to be confident in things in ways which others think are irrational. In the blogpost, I also discuss some more realistic scenarios where this could in principle occur, including some from medicine, law and daily life.

2. They are confident in things that are outright false

But even if someone is rationally confident in something, that thing might still be false some proportion of the time. In fact, according to one norm of trustworthy judgment, a perfectly accurate person who is 90% confident in a range of things will find that those things are false approximately 10% of the time. In other words, a perfectly accurate person would be "well calibrated" in the rough sense that, in normal circumstances, anything they assign a 90% probability to will be true approximately 90% of the time, anything they assign an 80% probability to will be true approximately 80% of the time, and so on. We can see this when we look at well-calibrated forecasters who assign high probabilities to a range of unique events: while most of those events will happen, some of them will not, as I discuss in detail here. Yet if we focus on a small sample of cases, such forecasters might look less rational than they are, since they will be confident in things that are outright false.

3. They countenance the "impossible" and are "paranoid"

However, studies suggest many people, including experts with PhDs in their domains, doctors, jurors and the general public, are not so well calibrated. One example of this is miscalibrated certainty: that is, when people are certain (or virtually certain) of things which turn out to be false. For instance, Philip Tetlock tracked the accuracy of a group of political experts' long-term predictions, and he found that out of all the things they were 100% certain would not occur, those things actually did occur 19% of the time. Other studies likewise suggest people can be highly confident in things which are false a significant portion of the time.

But a perfectly rational person wouldn't be so miscalibrated about these things which others are certain about, and so they would assign higher probabilities to things which others consider "impossible". For example, a perfectly calibrated person might assign 19% probabilities to the events which Tetlock's experts were inaccurately certain would not happen, or they might even assign some of them much higher probabilities, like 99%, if they had sufficiently good evidence for them. In such a case, the perfectly rational person would look quite "irrational" from the perspective of Tetlock's experts. But insofar as miscalibrated certainty is widespread among experts or the general public, so too would be the perception that truly rational people are "irrational" by virtue of countenancing what others irrationally consider to be "improbable" at best or "impossible" at worst.
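Calibration of this kind can actually be measured. As a concrete illustration, here is a minimal sketch that groups a set of probability judgments by their stated probability and compares each group with how often the forecasted events actually occurred; the forecasts below are invented purely for illustration.

```python
# A minimal sketch of a calibration check: group forecasts by stated probability
# and compare each group's average stated probability with the observed
# frequency of the forecasted events. The forecasts are invented for illustration.
from collections import defaultdict

forecasts = [  # (stated probability, did the event occur?)
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.8, True), (0.8, False), (0.8, True), (0.8, True), (0.8, True),
    (0.1, False), (0.1, False), (0.1, True), (0.1, False), (0.1, False),
]

bins = defaultdict(list)
for p, occurred in forecasts:
    bins[round(p, 1)].append(occurred)

for stated, outcomes in sorted(bins.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%} -> observed {observed:.0%} over {len(outcomes)} forecasts")

# A well-calibrated judge's stated and observed frequencies roughly match:
# for example, their 90% forecasts come true about 90% of the time.
```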
Furthermore, when others have miscalibrated certainty about outcomes that are "bad", a rational person will not only look like they believe in the possibility of "impossible" outcomes; they will also look irrationally "paranoid" for doing so, precisely because the supposedly "impossible" outcomes are bad.

4. They avoid risks that don't happen

But not only will a rational person look "irrational" or "paranoid" by virtue of thinking the "impossible" is possible or even (probably) true; they will also act to reduce risks which never actually happen. This is because our leading theory of rational decision-making claims that we should make decisions not just based on how probable or improbable outcomes are, but rather based on the so-called expected utility of those outcomes, where the expected utility of an outcome is the product of its probability and how good or bad it is. This sometimes entails that we should avoid decisions if there is an improbable chance of something really bad happening. For example, it can be rational to avoid playing Russian roulette even if the gun is unlikely to fire a bullet, simply because the off-chance of a bullet firing is so bad that this outcome has a large negative expected utility. Likewise, for many other decisions in life, it may be rational to avoid options whose improbable outcomes are sufficiently bad.

This consequently means a rational person could often act to avoid many risks that never actually happen. But as is well known, people often evaluate the goodness of a decision based on its outcome, and if the bad thing does not happen, the average person might evaluate that decision as "irrational". This kind of thing could happen quite often too: whenever a rational decision-maker avoids a highly negative outcome that has only a 10% probability of occurring, they are avoiding an outcome which simply doesn't happen 90% of the time, potentially making them look quite irrational. The situation is even worse if the evaluator has the "miscalibrated certainty" we considered earlier, so that the outcome not only fails to happen but also looks, from the evaluator's perspective, like it was always "impossible".

5. They pursue opportunities that fail

But the same moral holds not only for decisions that avoid risk, but also for decisions that pursue reward. For example, a rational decision-maker might accept an amazing job offer which has merely a 10% chance of continued employment, provided the possibility of continued employment is sufficiently good. Of course, the decision to accept that job has a 90% chance of resulting in unemployment, potentially making the decision again seem like a "failure" when the probable happens. More generally, a rational decision-maker would pursue risky options with 90% chances of failure if those options are sufficiently good all things considered: it is like buying a lottery ticket with a mere 10% chance of winning but a sufficiently high reward. But again, the rational decision-maker could look highly "irrational" in the 90% of cases where those decisions lead to the less-than-ideal outcomes.

In any case, what both this habit and the preceding one have in common is that rational decision-making requires making decisions that lead to the best outcomes over many decisions in the long run, but humans often evaluate decision-making strategies based on mere one-off cases.
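To make this concrete, here is a minimal sketch of the kind of calculation expected utility theory recommends, applied to the job-offer example above; the specific utility numbers are invented purely for illustration.

```python
# A minimal sketch of an expected-utility comparison for the job-offer example.
# The utility numbers are invented for illustration; expected utility theory
# itself only says to weight each outcome's utility by its probability.

def expected_utility(outcomes):
    """Sum of probability * utility over a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

accept_offer = [
    (0.10, 1000.0),  # the amazing job works out: very high utility
    (0.90, -50.0),   # the job falls through: a period of unemployment
]
decline_offer = [
    (1.00, 20.0),    # keep the current, unremarkable situation
]

print(expected_utility(accept_offer))   # 0.1*1000 + 0.9*(-50) = 55.0
print(expected_utility(decline_offer))  # 20.0

# On these made-up numbers, accepting maximizes expected utility even though it
# "fails" 90% of the time, which is why the rational choice can look irrational
# in any single case.
```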
6. They are often irrational

Despite all that, arguably any realistic person who is as rational as could be would still be genuinely irrational to some degree. This is because our dominant theory of judgment and decision-making, dual process theory, entails that while we often make reflective judgments and decisions, there are countless situations where we do not and simply cannot. Instead, the literature commonly affirms that everyone employs a set of so-called heuristics for judgment and decision-making which, while often adequate, also often lead to sub-optimal outcomes. Consequently, even if someone were as rational as could be, they would still make irrational judgments and decisions in countless contexts where they cannot be expected to rely on their more reflective faculties. If we then focus solely on these unreflective contexts, we will get an inaccurate impression of how rational they are overall.

7. They do things that are often "crazy" or "unconventional"

All of the preceding thoughts entail that rational people may do things that seem "crazy" or "unconventional" by common standards: they might believe in seemingly impossible things, or act to reduce risks that never happen, or pursue opportunities that never materialize, and so on. This might express itself in weird habits, beliefs or many other ways. But this shouldn't be too surprising. After all, the history of humanity is a history of common practices which later generations appraise as unjustified or irrational. Large portions of humanity once believed that the earth was flat, that the earth was at the center of the universe, that women were incapable of or unsuited to voting, and so on and so forth. Have we then finally reached the apex of understanding in humanity's evolution, a point where everything we now do and say will appear perfectly rational by future standards? If history is anything to go by, then surely the answer is "No". And if that is the case, then perhaps the truly "rational" will be ahead of the rest, believing or doing things which seem crazy or irrational by our currently common standards.

HOW TO DISTINGUISH THE RATIONAL FROM THE IRRATIONAL

What I hope to have conveyed, then, is just how frequently our untrained intuitions about what is rational may diverge from what is truly rational: what's rational might appear "irrational", and vice versa. In a world where these intuitions might lead us astray, then, how can we really tell rational from irrational, accurate from inaccurate, or wise from unwise?
Some common rules of thumb might not work too well. For example, the evidence sometimes fails to find that years of experience, age or educational degrees improve accuracy, at least in domains like geopolitical forecasting. What follows, then, are some suggestions which I think are supported by the evidence.

Suggestion #1: Measure calibration

First, track the calibration of the judgments you care about, whether they are yours or someone else's. I provide some tools and ideas for how to do this here. This can help us to put things in perspective, to avoid focusing on single cases and to detect the pervasive miscalibration which can afflict our decision-making. And as other studies suggest, past accuracy is the best predictor of future accuracy.

Suggestion #2: Learn norms of reasoning

Additionally, I would suggest learning and practicing various norms of reasoning. These include the evidence-based suggestions for forming more accurate judgments in my book Human Judgment, such as practicing actively open-minded thinking and thinking in terms of statistics. They also include other norms, such as so-called "Bayesian reasoning", which can produce more accurate judgments in the Monty Hall problem and potentially other contexts, as I discuss here and here.

Suggestion #3: Think in terms of expected utility

Finally, when evaluating the rationality of someone's decisions, think in terms of expected utility theory. Expected utility theory is complicated, but here is a potentially helpful introduction to it (from my former PhD advisor, a really awesome person!). In short, though, expected utility theory requires us to ask what probabilities people attach to outcomes, how much they value those outcomes and, on my preferred version of it, whether their probabilities are calibrated and their values are in some sense objectively "correct". Then we can ask whether they are making decisions which lead to the best possible outcomes in the long run.

In these ways, I think we can better tell what's rational from what's not in a world where our intuitions can otherwise lead us astray.