(Forthcoming in Psychology Today)

THE TL;DR KEY POINTS

- We often neglect "likelihoods": how probable our evidence is under each competing hypothesis.
- Experiments on the Monty Hall problem reveal a new cognitive bias, likelihood neglect bias: participants knew the likelihoods yet failed to let them shift their probabilities for the hypotheses.
- Overcoming the bias by applying the law of likelihood can help us recognize strong evidence in medicine, law, and everyday life.
THE IMPORTANCE OF RECOGNIZING GOOD EVIDENCE

We all need to form accurate judgments about the world in many diverse and important contexts. What is the correct diagnosis for someone's medical condition? Does someone have a crush on you? Did the defendant kill the victim? Here, I will discuss how the evidence can reveal the truth about these questions, and potentially others you might care about, but only if we think in the right ways.

It's well-documented that various biases can hinder us in our quest for truth. In a recently published paper in Judgment and Decision Making (freely available here), I introduce a new cognitive bias: likelihood neglect bias. Understanding this bias, and how to overcome it, can help us recognize good evidence and find the truth in numerous cases where others might not. To show this, I'll use a well-known brain-teaser which reveals the bias, the Monty Hall problem, and then I'll apply the emerging ideas to show how we can find the truth in other realistic cases, including medicine, law, and more mundane topics. You might then want to apply these ideas to other cases you care about.

THE MONTY HALL PROBLEM

In the Monty Hall problem, a prize is randomly placed behind one of three doors. You select a door, which remains closed for the time being. If the prize is behind the door you selected, then the gameshow host, Monty Hall, will open one of the other two doors at random, each with equal probability. But if the prize is behind one of the doors you did not initially select, then he will open the remaining door which neither conceals the prize nor was selected by you. In my experiments, participants read a description of the problem and are then asked to imagine that they select door A and that Monty Hall opens door C.

The question, then, is this: what is the probability that the prize is behind door B after Monty Hall opens door C?

Most people think the answer is 1/2 or 50%. But the correct answer is that door B has a 2/3, or roughly 67%, probability of concealing the prize. Virtually no one without training gets this right when first encountering the problem, including my younger self and all 50 participants in one group of my experiments. In fact, it is because the right answer is so counterintuitive that one cognitive scientist has called the Monty Hall problem "the most expressive example of cognitive illusions or mental tunnels in which even the finest and best-trained minds get trapped" (Piattelli-Palmarini, 1994a, p. 161).

To see why the correct solution is indeed correct, let us consider the probabilistic explanation before applying the same ideas to some more important real-life problems.

LIKELIHOOD NEGLECT BIAS AND THE EXPLANATION OF THE MONTY HALL PROBLEM

From a probabilistic standpoint, the sole reason door B more probably conceals the prize than door A is the likelihood that Monty Hall would open door C if door B conceals the prize. Here, a "likelihood" is a technical term from probability theory: it refers to the probability of the evidence we are certain about, assuming the truth of a hypothesis we are not certain about. In this case, we are certain that door C was opened, but not about which door conceals the prize, so the likelihood is the probability of door C being opened given that a particular door conceals the prize. This likelihood differs depending on whether door A or door B conceals the prize.
If door A conceals the prize, then there's a 50% likelihood that Monty Hall would open door C, since he could have opened either door B or door C. But if door B conceals the prize, then there's a 100% likelihood that Monty Hall would open door C, since he cannot open the door you selected (door A) or the door concealing the prize (door B), so he must open the only remaining door (door C).

From a probabilistic perspective, the fact that the opening of door C is twice as likely if door B conceals the prize (together with the fact that the prize was randomly placed behind one of the doors) makes door B twice as probable to conceal the prize as door A. Mathematicians calculate this using Bayes' theorem as below (although you don't need to know Bayes' theorem or anything aside from the importance of likelihoods here):

$$P(B \mid c) = \frac{P(c \mid B)\,P(B)}{P(c \mid A)\,P(A) + P(c \mid B)\,P(B) + P(c \mid C)\,P(C)} = \frac{100\% \times \tfrac{1}{3}}{50\% \times \tfrac{1}{3} + 100\% \times \tfrac{1}{3} + 0\% \times \tfrac{1}{3}} = \frac{2}{3}$$

where P(B|c) is the probability that door B conceals the prize given that door C is opened, P(A), P(B) and P(C) are the prior probabilities that the respective doors conceal the prize, and P(c|A), P(c|B) and P(c|C) are the likelihoods of door C being opened given the respective hypotheses. Again, the formula isn't too important; what is important is that door B more probably conceals the prize because of the likelihoods, that is, because P(c|A)=50% and P(c|B)=100%.

What my experiments showed, however, is that most participants were aware of what the likelihoods were, but they didn't realize that these likelihoods favored one hypothesis over another (for other explanations of this and of the Monty Hall problem, see the Appendix below). This, then, is likelihood neglect bias: by definition, it is being aware that the evidence is more likely given one hypothesis than another, all while failing to realize that the evidence therefore raises the probability of the former hypothesis relative to the latter.

Technically, likelihood neglect bias is a violation of what is known as the law of likelihood, a law of probability which states (roughly speaking) that if the evidence is more likely given one hypothesis than another, then the evidence necessarily raises the probability of the former hypothesis relative to the latter. The strength of the evidence for a hypothesis is measured by the so-called likelihood ratio, that is, by how much more likely the evidence is under one hypothesis than under the other. In the original Monty Hall problem, the likelihood ratio is 100%/50%=2, meaning that door C is twice as likely to be opened if door B conceals the prize than if door A conceals the prize. Overcoming likelihood neglect bias requires us to apply the law of likelihood in our assessment of the evidence, thereby raising our probabilities for hypotheses and getting closer to the truth in cases where others do not.
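For readers who like code, here is a minimal sketch of that Bayes' theorem calculation in Python (the door labels and variable names are just illustrative):

```python
# Posteriors for the original Monty Hall problem via Bayes' theorem.
# Hypotheses: which door conceals the prize. Evidence: Monty opens door C.
priors = {"A": 1/3, "B": 1/3, "C": 1/3}       # the prize was placed at random
likelihoods = {"A": 0.5, "B": 1.0, "C": 0.0}  # P(door C opened | prize behind door)

joints = {door: priors[door] * likelihoods[door] for door in priors}
total = sum(joints.values())                  # P(door C opened)
posteriors = {door: joint / total for door, joint in joints.items()}

print(posteriors)  # {'A': 0.333..., 'B': 0.666..., 'C': 0.0}
```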
RECOGNIZING STRONG EVIDENCE WHERE OTHERS DO NOT: LIKELIHOOD NEGLECT BIAS AND THE NEW MONTY HALL PROBLEM

In a sense, then, overcoming likelihood neglect bias can help us recognize strong evidence and find the (probable) truth where others cannot. To see how, consider another innovation of the experiments: they showed that not only do people get incorrect probabilities for the original Monty Hall problem, but they also get incorrect probabilities for an adaptation of the problem where there is objectively strong evidence favoring door B over door A.

More specifically, in another experiment, participants were presented with what I call the "new Monty Hall problem", a version where the likelihood of door C being opened given that door A conceals the prize is merely 10%, not 50% as in the original problem. In this case, the likelihood ratio changes to 100%/10%=10, meaning door C is 10 times more likely to be opened if door B conceals the prize than if door A conceals the prize. Under these conditions, one can prove with mathematics and simulations that the probability that door B conceals the prize after door C is opened is 10/11, or approximately 91%, as I show in the paper and its supplementary materials. But despite this, 84% of participants (21 of 25) thought the evidence was effectively irrelevant and that door B had merely a 50% probability of concealing the prize. (And the four other participants gave flawed answers for different reasons too.) Importantly, though, most of the participants were aware of the likelihoods: they knew that door C being opened was much more likely if door B conceals the prize than if door A does. Remarkably, this did not affect their probabilities for the hypotheses at all. This, again, is likelihood neglect bias: realizing the evidence is more likely given one hypothesis than another, all while not realizing that this evidence favors that hypothesis over the other.

What this shows, then, is that sometimes the world can present us with strong evidence favoring some hypotheses over others, yet we might be oblivious to this if we neglect the likelihood of the evidence. The evidence could favor hypotheses to the point of virtual certainty, too, especially if there are multiple pieces of evidence whose likelihoods we neglect. For instance, if there are just two pieces of evidence as strong as door C being opened in the new Monty Hall problem, then one can be 99% certain of the relevant hypothesis, and this is in a case where we know humans fail to think the evidence is relevant at all! The evidence could in principle be so strong as to virtually reveal the truth, but we might fail to realize this if we neglect the likelihoods and do not think about the evidence in the right ways.
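Rerunning the earlier sketch with the new 10% likelihood, and then chaining two such pieces of evidence in odds form, confirms these numbers (again, this is just an illustrative sketch):

```python
# Posterior for the new Monty Hall problem: P(door C opened | prize behind A) = 10%.
priors = {"A": 1/3, "B": 1/3, "C": 1/3}
likelihoods = {"A": 0.1, "B": 1.0, "C": 0.0}

joints = {door: priors[door] * likelihoods[door] for door in priors}
print(joints["B"] / sum(joints.values()))     # 0.909..., i.e., 10/11 or about 91%

# Two independent pieces of evidence this strong, combined in odds form:
posterior_odds = 1.0 * 10 * 10                # even prior odds times two likelihood ratios of 10
print(posterior_odds / (1 + posterior_odds))  # 0.990..., i.e., 99% certainty
```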
EXAMPLES IN REALISTIC CONTEXTS

To see how this could play out in more important and realistic scenarios, let us consider a few examples, each of which is adapted from a real situation where subsequent evidence supported the favored hypothesis (albeit with details changed to preserve anonymity).

Example #1: Medical diagnosis and treatment

Suppose Tabatha has a debilitating medical condition but does not know for sure what is causing it. Suppose she furthermore has some hunches and tries a particular therapy with some suggestive but severely limited studies supporting its efficacy. She assigns the therapy an initial 10% probability of working, but she tries it for a few months, and she starts feeling much better. Now, if the therapy was effective, we can suppose there would be a 100% likelihood she would have a remarkable improvement (I will discuss how we can assign these values later, although the exact values of the likelihoods matter less than the general ideas). On the other hand, if the therapy was ineffective, how do we determine the likelihood she would have recovered by chance?

One demonstrably accurate method of assigning probabilities is to look at the frequency with which she has had similar improvements in her condition in the past, since study after study after study has suggested the past can help us assign probabilities. To do this, then, suppose Tabatha has had the disease for 3 years and has never seen a comparable improvement in her condition before. Using this historical information, she could accurately assign a likelihood of her remarkably improving by chance of 1/156, or approximately 0.6%, since she had not improved remarkably in any of the past 156 weeks. In this case, we have a likelihood ratio of 100%/0.6%≈166, meaning the recovery is about 166 times more likely if the therapy is effective than if it is not. If this is the case, then using Bayes' theorem (as one can in this post here), the probability that the therapy is effective given her remarkable recovery is approximately 95%, and she can be confident that the therapy works. However, if she commits likelihood neglect and instead attributes her improvement to mere "chance", then she could fail to realize the implications of evidence which strongly favors one hypothesis over another.

Of course, one alternative explanation is that she is improving merely because of a placebo effect: perhaps her trying the treatment gives her the hope or belief that she will improve, and this itself causes the improvement. Placebo effects are real, and that is why there are widespread experimental protocols to rule them out. But in this case, we could suppose she has evidence to discredit this explanation. Perhaps, for example, she has had years of trying alternative therapies, some of which she was even more confident in because they were recommended by experts. If those therapies did not improve her condition despite their potential for a placebo effect, then the probability of the placebo explanation would be undermined by this evidence of historical frequencies. Likewise, when getting into the details, there may also be evidence to undermine other alternatives, in which case Tabatha can be virtually certain the therapy is effective if she avoids likelihood neglect.
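Here is a minimal sketch of Tabatha's calculation, using the two-hypothesis form of Bayes' theorem (the prior and the likelihoods are the suppositions from the example):

```python
# Was the therapy effective, given Tabatha's remarkable recovery?
prior_effective = 0.10            # her initial probability that the therapy works
p_recovery_if_effective = 1.0     # supposed likelihood of improvement if it works
p_recovery_by_chance = 1 / 156    # no comparable improvement in 156 weeks

numerator = prior_effective * p_recovery_if_effective
denominator = numerator + (1 - prior_effective) * p_recovery_by_chance
print(numerator / denominator)    # 0.945..., i.e., approximately 95%
```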
Example #2: Someone having a crush on you

Let us use another example which has, for educational purposes, turned out to engage my students' interest in the past: whether someone has a crush on you. Suppose you and another person notice each other a lot in a dining hall at your college; you both make eye contact a lot, and you assign a conservative 30% initial probability to them having a crush on you, since you have historically noticed that at least 30% of people who make eye contact with you like that have turned out to like you. Now suppose that one lunchtime, the dining hall is virtually empty and there are 65 free tables around, but when this person comes in to get lunch, they choose to sit at the table directly in front of you. You wonder, then, whether their sitting in front of you means they like you, and again, the likelihoods can provide an answer. In particular, suppose that if they do not like you, then they could have sat at any of the 65 tables, and so the likelihood of them sitting at the table in front of you is 1/65, or about 1.5%: very unlikely.

However, if they do like you, then we could suppose the likelihood of them sitting at the table in front of you is about 70%, and thus much higher, because they might want to sit there to get your attention, to encourage you to approach them and start a conversation, or whatever. If that is the case, then the likelihood ratio would be 70%/(1/65)=45.5, and it would again mean that the evidence strongly favors the hypothesis that they like you, in this case with a probability of about 95%. But again, if we merely attribute the occurrence to "chance" while neglecting the likelihoods, we might similarly not realize how strong the evidence is.
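A similar sketch in odds form confirms the arithmetic here (the 30% prior and 70% likelihood are the suppositions above):

```python
# Does this person have a crush on you, given that they sat right in front of you?
prior = 0.30                        # initial probability that they like you
likelihood_ratio = 0.70 / (1 / 65)  # 70% if they like you vs. 1-in-65 by chance

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio  # the law of likelihood in odds form
print(posterior_odds / (1 + posterior_odds))    # 0.951..., i.e., about 95%
```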
Example #3: Law and the attribution of a crime

Let us use another example that is based on an actual scenario (albeit with some details modified), this time concerning law. Jemaine is an investigative journalist based in central Africa, and he publishes a news piece that is critical of a local gang. Six months later, his house burns down. It is known that some in the gang did not like the critical news piece, and there have been rumors that they have retaliated against others before. What, then, is the probability that Jemaine's house was sabotaged?

Again, if we rely on well-documented heuristics and assign the "accident" hypothesis a high probability merely because it seems plausible or consistent with the evidence, then we might commit likelihood neglect and fail to recognize evidence for the truth. To think more carefully about the case, we could assign initial probabilities and then consider likelihoods. There are multiple rumors that the gang has retaliated against critics before, so suppose we assign an initial 10% probability to the hypothesis that Jemaine would experience retaliation, before updating on the evidence about the house burning down. (That said, if the gang is sufficiently powerful and motivated, it could be very difficult to determine the true frequency with which they successfully execute covert arson.)

Then, we can think about the likelihood of Jemaine's house burning down given that he was or wasn't retaliated against. Suppose that if he was retaliated against, there is a 20% likelihood the retaliation would involve his house being burned down, since there are also other ways in which he could have been retaliated against, such as a plane crash, a staged suicide, and so on. On the other hand, suppose that if he was not retaliated against, then there is a 1/10,000 likelihood his house would burn down, since although accidental house fires occasionally happen, they are still exceedingly improbable relative to the vastly larger number of cases where houses never burn down. In this case, the likelihood ratio would be 20%/(1/10,000)=2,000, meaning the house burning down is 2,000 times more likely if he was retaliated against. Again, then, using Bayes' theorem, we can be about 99.5% confident that Jemaine's house was burned down by the gang. And in the real-life case from which this example was adapted, an informant close to the "gang hierarchy" did indeed confirm that Jemaine was retaliated against, despite the gang's attempts to cover it up. But if we neglect likelihoods and rely merely on what seems "plausible" or "consistent" with the evidence, we could fail to recognize strong evidence when it can reveal the truth to us.
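And here is a minimal sketch of Jemaine's case, wrapping the update in a small helper function (all of the inputs are the suppositions above):

```python
# Probability that Jemaine was retaliated against, given that his house burned down.
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability via the law of likelihood, using odds."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

lr = 0.20 / (1 / 10_000)    # 2,000: house fire given retaliation vs. by accident
print(posterior(0.10, lr))  # 0.9955..., i.e., about 99.5%
```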
CONCLUDING THOUGHTS

Of course, in the examples above, some people might naturally think the evidence is relevant and not neglect the likelihoods, and it is an open question exactly when people neglect likelihoods. However, from experience, I know that some people can neglect likelihoods in cases like these, even if others do not. In any case, the point is that the experimental evidence suggests we can sometimes neglect likelihoods, potentially in more important contexts too.

That said, an important warning: even if we do not neglect likelihoods, there are many ways we might try to reason about them unsuccessfully. For example, we might assign incorrect likelihoods in the first place, or we might not properly take into account so-called "auxiliary hypotheses", or we might not use valid tools like Bayes' theorem to correctly revise our views given information about likelihoods. I discuss how to avoid these and other errors in another post here. For now, though, here's an appendix of other explanations for the Monty Hall problem.

APPENDIX: MORE EXPLANATIONS OF THE MONTY HALL PROBLEM

Above, I gave an explanation of the Monty Hall problem in terms of probability theory, but there are other ways to explain the correct answer to the problem too.

Explanation #2: The Many Doors Adaptation

One style of explanation comes from the genius Marilyn vos Savant. Suppose we adapt the problem: there are 1,000 doors instead of 3, Monty Hall randomly places a prize behind one of the doors, you select one, and then he opens every other door except one. Suppose you select, say, door 358, and then Monty Hall opens every other door except door 721. Intuitively, it is much, much more probable that door 721 conceals the prize than that the door you happened to select conceals the prize. In fact, the probability is provably 99.9%, and the reason is again the likelihoods: if door 721 conceals the prize, there is a 100% likelihood that Monty Hall would open every other unselected door except door 721, so P(721 is not opened|door 721 conceals the prize)=100%. But if the door you selected conceals the prize, then there is only a 1/999 likelihood that Monty Hall would open every door except door 721, simply because he could have left any one of the 999 unselected doors closed, so P(721 is not opened|door 358 conceals the prize)=1/999. The point is that this adaptation and others like it show that as we change the number of doors, and consequently the relevant likelihoods, we can see how this affects the probability of the relevant hypotheses. And if we decrease the number of doors enough, we will eventually get to just three doors and the 2/3 probability that door B conceals the prize.

Explanation #3: The Mental Simulations Approach

A third way to explain the problem is with what I call the mental simulations approach in the recent article. This approach involves running a number of mental simulations of the Monty Hall problem and then proportioning the mental simulations by the relevant probabilities. So suppose we first run, say, 30 mental simulations; that is, we imagine that the Monty Hall problem happens 30 times. Since there is a 1/3 probability that a given door conceals the prize at the beginning, we make it so that each door conceals the prize in 1/3 of the simulations, or 10 of the 30. Then, since the likelihoods mean door C is opened 50% of the time when door A conceals the prize and 100% of the time when door B conceals the prize, we make it so that Monty Hall opens door C in 50% and 100% of those simulations respectively. We could depict this as follows:

- Door A conceals the prize in 10 simulations, and Monty Hall opens door C in 5 of them.
- Door B conceals the prize in 10 simulations, and Monty Hall opens door C in all 10 of them.
- Door C conceals the prize in 10 simulations, and Monty Hall opens door C in none of them.

What these mental simulations show is that 10 out of 15 times, or 2 out of 3 times, when door C is opened, it is because door B conceals the prize, so door B conceals the prize with a probability of 2/3.
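For readers who prefer code, here is a minimal sketch that tallies those 30 proportioned simulations (the counts mirror the list above):

```python
# Thirty proportioned "mental simulations" of the Monty Hall problem.
# Each entry is (door concealing the prize, door that Monty Hall opens).
simulations = (
    [("A", "C")] * 5 + [("A", "B")] * 5  # prize behind A: door C opened half the time
    + [("B", "C")] * 10                  # prize behind B: door C always opened
    + [("C", "B")] * 10                  # prize behind C: Monty can never open door C
)

c_opened = [prize for prize, opened in simulations if opened == "C"]
print(len(c_opened))                        # 15 simulations where door C is opened
print(c_opened.count("B") / len(c_opened))  # 10/15 = 2/3
```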
The experiments further showed that training in the mental simulations approach led 32% of participants, or 16 of 50, in one group of the experiment to get the right probabilities for the Monty Hall problem. Of course, this is not perfect, but it is at least better than the other group, where none of the participants got the correct probabilities.

Explanation #4: Computer Simulations

Another way to explain the problem is simply to simulate it many times using a computer. I provide some code and instructions for how you can do this yourself in Appendix C of the paper here. If you run the simulations, you can see that door B will conceal the prize 2 out of 3 times in the original Monty Hall problem and 10 out of 11 times in the new Monty Hall problem. Apparently, it was only after seeing computer simulations like these that the famous mathematician Paul Erdős finally accepted the 2/3 solution to the problem and accepted that his initial 1/2 intuition was incorrect.
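For illustration, here is a minimal sketch of such a simulation (not the exact code from Appendix C): setting the probability that Monty Hall opens door C when door A conceals the prize to 50% reproduces the original problem, and setting it to 10% reproduces the new one.

```python
import random

def simulate(p_open_c_given_a: float, trials: int = 100_000) -> float:
    """Estimate P(prize behind door B | you picked door A and Monty opened door C)."""
    b_wins = c_opened = 0
    for _ in range(trials):
        prize = random.choice(["A", "B", "C"])
        if prize == "A":
            # Monty opens door C with the given probability, otherwise door B.
            opened = "C" if random.random() < p_open_c_given_a else "B"
        else:
            # Monty must open the door that is neither selected (A) nor the prize.
            opened = "B" if prize == "C" else "C"
        if opened == "C":
            c_opened += 1
            b_wins += prize == "B"
    return b_wins / c_opened

print(simulate(0.5))  # roughly 2/3 for the original Monty Hall problem
print(simulate(0.1))  # roughly 10/11, or 91%, for the new Monty Hall problem
```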