(Forthcoming in Psychology Today)

TL;DR key points
1. Assign likelihoods to the evidence
2. Assign prior probabilities to the hypotheses
3. Update using Bayes’ theorem
4. Use calibrated probabilities
5. Recognize auxiliary hypotheses
6. Recognize consilience
7. Be cautious about fallible heuristics

THE IMPORTANCE OF BAYESIANISM

As discussed elsewhere, in many important contexts we need to form accurate judgments about the world: this is true of medical diagnosis and treatment, of law proceedings, of policy analysis and indeed of a myriad of other domains. And as discussed elsewhere, more accurate judgments often mean better decisions, including in contexts where they can be a matter of life and death—such as medicine and law.

In analytic epistemology and philosophy of science, “Bayesianism” is the dominant theory of how we should form rational judgments of probability. Additionally, as I discuss elsewhere, Bayesian thinking can help us recognize strong evidence and find the truth in cases where others cannot. But there is ample evidence that humans are not Bayesians, and there are ample arguments that Bayesians can still end up with inaccurate judgments if they start from the wrong place (i.e. the wrong “priors”).

So, given the importance of accurate judgments and given Bayesianism’s potential to facilitate such accuracy, how can one be an accurate Bayesian? Here, I argue that there are seven requirements of highly accurate Bayesians (somewhat carrying on the Stephen Covey-styled characterization of rationality which I outlined here). Some requirements will be well known to relevant experts (such as requirements 1 to 3) while others might be less so (such as requirements 4 to 7). In any case, this post is written for both the expert and the novice, hoping to say something unfamiliar to both—while the familiar remainder can be easily skipped. With that caveat, let us consider the first requirement.

THE SEVEN REQUIREMENTS OF HIGHLY ACCURATE BAYESIANS

Requirement #1: Assigning Likelihoods

Bayesianism requires us to consider the likelihood of the evidence we have given various hypotheses. Here, a “likelihood” is a technical term that refers to the probability of the evidence assuming the truth of some hypothesis that we are uncertain about. Examples include the likelihood of an improvement in your symptoms given the uncertain hypothesis that your medication is effective, the likelihood that someone would smile at you given the uncertain hypothesis that they have a crush on you, or the likelihood that someone’s fingerprints would be at the scene of the crime given the uncertain hypothesis that they are guilty.

In another blogpost, I discuss evidence that humans sometimes don’t do this: they “neglect” the likelihoods in ways which leave them oblivious to even strong evidence when they see it. For example, in one experiment, participants are aware that the evidence is 10 times more likely if one hypothesis is true compared to another, but when they receive that evidence, they assign the hypotheses equal probabilities instead of assigning the correct probability of 91% to one of them. There, I also discuss how this evidence about likelihoods can arise in more realistic and natural settings too.
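To see where that 91% comes from, here is a minimal sketch of the arithmetic, assuming (as in the experiment just described) that the two hypotheses start out equally probable and that the evidence is 10 times more likely under the first hypothesis; the specific likelihood values below are illustrative assumptions, since only their ratio matters:

```python
# Two hypotheses, equally probable before the evidence arrives.
prior_h1 = 0.5
prior_h2 = 0.5

# The evidence is 10 times more likely if h1 is true than if h2 is true.
# (The exact values are assumptions for illustration; only the 10:1 ratio matters.)
likelihood_e_given_h1 = 0.10
likelihood_e_given_h2 = 0.01

# Bayes' theorem: P(h1|e) = P(e|h1)P(h1) / [P(e|h1)P(h1) + P(e|h2)P(h2)]
posterior_h1 = (likelihood_e_given_h1 * prior_h1) / (
    likelihood_e_given_h1 * prior_h1 + likelihood_e_given_h2 * prior_h2
)

print(round(posterior_h1, 2))  # 0.91, not the 0.5 that likelihood neglect would suggest
```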
Requirement #2: Assigning Priors

The second requirement is to assign prior probabilities to hypotheses, especially since sometimes multiple implausible hypotheses can make the evidence likely. For example, suppose you are at home, seemingly alone, and you hear the door slam. This could be quite likely if there is an intruder in your house, in which case it should somewhat raise your probability that there is such an intruder in the house. However, if there is a sufficiently low prior probability of an intruder, and if other explanations have a higher prior probability—like the wind slamming the door shut—then the evidence won’t necessarily make the intruder hypothesis probable all things considered, even though it somewhat raises its probability. This illustrates that when we consider the likelihood of the evidence, we also need to consider the prior probability of the relevant explanations in order to know just how confident we can be in those explanations—as I do in the three “realistic” examples here.

Requirement #3: Updating Using Bayes’ Theorem

But Bayesianism tells us not just to consider likelihoods and to assign priors, but also to update our probabilities in specific ways once we receive our evidence. This often requires us to use Bayes’ theorem, since if the priors and likelihoods are accurate, then the only accurate posterior probability is the one prescribed by Bayes’ theorem. Updating with Bayes’ theorem can be complicated, but I give some examples of how this can be done here, as well as a calculator here to easily facilitate these calculations.
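As a rough illustration of what such an update looks like, here is a sketch that applies Bayes’ theorem to the door-slam example from Requirement #2. The numbers are purely illustrative assumptions (and treat the intruder and the wind as the only candidate explanations); they are not figures from the examples or calculator linked above:

```python
# Hypotheses: an intruder slammed the door vs. the wind did.
# All numbers are illustrative assumptions, and for simplicity these are
# treated as the only two possible explanations.
prior = {"intruder": 0.001, "wind": 0.999}    # priors: intruders are rare
likelihood = {"intruder": 0.8, "wind": 0.1}   # P(door slam | hypothesis)

# Bayes' theorem: P(h|e) = P(e|h)P(h) / sum over hypotheses of P(e|h')P(h')
prob_evidence = sum(likelihood[h] * prior[h] for h in prior)
posterior = {h: likelihood[h] * prior[h] / prob_evidence for h in prior}

print(round(posterior["intruder"], 3))  # ~0.008: raised from 0.001, yet still improbable
```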
Requirement #4: Using Calibrated Probabilities

So Bayesianism requires us to assign initial probabilities—including prior probabilities for hypotheses and likelihoods for evidence—and to then update our probabilities when we receive that evidence. However, we also need to assign initial probabilities that are in some sense “correct”—an issue relating to the notorious “problem of the priors”. After all, if we assign likelihoods and prior probabilities, and these turn out to be inaccurate, then the outputs of Bayes’ theorem will also be inaccurate.

Elsewhere, I argue for a particular solution to this problem called calibrationism. Calibrationism basically says we should trust probabilities—including the input probabilities in Bayesian calculations—only if we have evidence that the ways we have assigned them are “well calibrated”. Probabilities are well calibrated when they correspond to the objective frequency with which things are true; for example, here I discuss examples of forecasters who assign 97% probabilities to events which happen about 95% of the time—even events that appear “unique” and that might have seemed impossible to assign numerically precise probabilities to. It is in principle easy to get this evidence about how calibrated we are; I explain how to do so in this other post here (and a minimal sketch of such a check appears at the end of this section).

The implication is that if we have evidence that we assign calibrated probabilities, and we input these probabilities into Bayesian calculations, then we can likewise trust the outputs of these calculations, even in cases where the calculations instruct us to be highly confident or certain in sometimes counter-intuitive ways (like with the new Monty Hall problem here). We can also verify the trustworthiness of outputs from Bayesian calculations in the same calibrationist way: that is, by seeing whether the resulting probabilities similarly correspond to the frequency with which things are true. (And we can mathematically prove that they will under certain assumptions.)

There are various well-supported ways to get more accurate and calibrated probabilities, as I discuss here. One is to use statistics or frequencies where relevant: for example, in assigning prior probabilities about whether the wind or an intruder caused the door to slam, we could consider statistics about how often you or others have experienced intruders, how often the wind slams doors shut in your house and so on. (Of course, we need not meticulously record these statistics, but even our experience can roughly tell us something about what the statistics would be, such as the wind slamming the door every few days when the windows are open.)
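Here is the minimal sketch of a calibration check mentioned above: keep a track record of your probability judgments and compare each stated probability with how often those events actually occurred. The track record below is made up for illustration; the post linked above explains how to gather such evidence in practice.

```python
from collections import defaultdict

# A made-up track record of probability judgments: (stated probability, did it happen?).
track_record = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True),
    (0.3, False), (0.3, False), (0.3, True),
]

# Group judgments by the probability assigned and compare it with the
# observed frequency with which the events actually occurred.
groups = defaultdict(list)
for stated_probability, occurred in track_record:
    groups[stated_probability].append(occurred)

for stated_probability in sorted(groups):
    outcomes = groups[stated_probability]
    frequency = sum(outcomes) / len(outcomes)
    print(f"Stated {stated_probability:.0%}: happened {frequency:.0%} of the time "
          f"({len(outcomes)} judgments)")
```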
Requirement #5: Recognizing Auxiliary Hypotheses and Successful Accommodation

But being an accurate Bayesian also requires us to understand so-called “auxiliary hypotheses” and the role they play in attempts to accommodate evidence. Auxiliary hypotheses are hypotheses that are distinct from the central ones we care about but which nevertheless may have implications for how we interpret the evidence. For example, here I discuss a case where there is a low likelihood of a person sitting at the table in front of you if they do not have a crush on you. To make this likelihood appear higher, however, you might appeal to the auxiliary hypothesis that, say, that particular table just happens to be their favorite table. Now, we could suppose that this likelihood is indeed higher: if the person does not have a crush on you and that table is their favorite table, then there is a 100% likelihood they would sit at that table. In this case, one might argue the evidence isn’t so strong when one can appeal to an auxiliary hypothesis like this.

In another article, I discuss in detail how to make sense of attempts to accommodate evidence with auxiliary hypotheses like this. However, the main message is that an attempt like this will fail if the initial probability of the auxiliary hypothesis is sufficiently low, and it will succeed if the initial probability of the auxiliary hypothesis is sufficiently high. For example, if we have no independent evidence or good reason to think that that particular table, rather than any of the 64 other tables, is their favorite, then we can assign this auxiliary hypothesis a 1/65 initial probability. And if that is the case, then this accommodation attempt fails because it attempts to raise the likelihood of the evidence only by appealing to an improbable auxiliary hypothesis.

We can prove this using a theorem from that paper called the theorem of successful accommodation. The theorem basically says that an auxiliary hypothesis (symbolized with a) and central hypothesis (symbolized with h1) accommodate some evidence (symbolized with e) as successfully as—or more successfully than—some alternative hypothesis h2 just in case:

P(e|h2) ≤ P(a|h1) × P(e|h1&a)

In words, this says that the accommodation attempt is successful just in case the likelihood of the evidence given h2 is less than or equal to the result of multiplying the initial probability of the auxiliary hypothesis (supposing h1 is true) by the likelihood of the evidence given h1 and the auxiliary a. In this case, we could suppose the evidence e is that the person sits at the table in front of you, P(e|h2) is the likelihood they would sit at that table (denoted with e) if they have a crush on you (denoted with h2), P(e|h1&a) is the likelihood they would sit at that table if they did not have a crush on you (denoted with h1) but that table was still their favorite table (denoted with a), and P(a|h1) is the probability that that table is their favorite table (assuming they do not have a crush on you). And if we suppose that we have no reason to think that table is their favorite table, then P(a|h1) is low; plugging in some illustrative probabilities, as in the sketch below, the evidence strongly favors the hypothesis that they have a crush on you.
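Here is a sketch of that calculation. The 1/65 initial probability and the 100% likelihood come from the discussion above; the likelihood that they would sit at that table if they do have a crush on you (P(e|h2) = 0.5) is an illustrative assumption.

```python
# Quantities from the discussion above (the value of p_e_given_h2 is an assumption).
p_a_given_h1 = 1 / 65        # P(a|h1): that table happens to be their favorite
p_e_given_h1_and_a = 1.0     # P(e|h1&a): they sit there if it is their favorite
p_e_given_h2 = 0.5           # P(e|h2): they sit there if they have a crush on you

# Theorem of successful accommodation: h1 and a accommodate e at least as
# successfully as h2 just in case P(e|h2) <= P(a|h1) * P(e|h1&a).
bound = p_a_given_h1 * p_e_given_h1_and_a          # ~0.015

print(p_e_given_h2 <= bound)   # False: the accommodation attempt fails
print(p_e_given_h2 / bound)    # ~32.5: P(e|h2) exceeds the bound by a wide margin
```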
In this way, we could use these theorems and principles to determine when appealing to auxiliary hypotheses successfully accommodates the evidence, in this and countless other cases.

Requirement #6: Recognizing Consilience

Auxiliary hypotheses play an especially important role when a central hypothesis is potentially supported by multiple kinds of evidence—in which case we say the central hypothesis “consiliates” the evidence. I also discuss consilience in detail in that article. There, a main takeaway is that sometimes the evidence will strongly support a hypothesis which explains multiple kinds of evidence if the probabilities of the alternative hypotheses are sufficiently low—even if each alternative hypothesis seems potentially plausible when considered in isolation. I use one example to illustrate this: Darwin’s arguments in favor of evolution over special creationism (the idea that God created each species in a separate creative act). Darwin thought evolution was probable because it could explain diverse kinds of evidence.

However, Darwin noted that the special creationist could in principle offer another explanation for each piece of evidence: that God simply desired it to be that way—that God simply desired the similarities in webbed feet among specific birds, the similarities in bone structures among some animals, and the similarities among some insects—even though he created them all separately. As I discuss in the article, the problem with the creationist explanation is that, for each piece of evidence, it appeals to an auxiliary assumption about God’s desires that—while potentially plausible in isolation—is still less than certain. And combining uncertainty with uncertainty results in even more uncertainty.

We can use an analogy to illustrate the point: if each auxiliary hypothesis has the probability of a coin flip landing heads—50%—then the probability they are all true is much lower, even though each might seem plausible or possible in isolation. For example, the probability of three coin flips all landing heads is 50% × 50% × 50% = 12.5%, and one can be 87.5% confident that not all of the coin flips landed heads.
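To make the arithmetic of this analogy explicit, here is a small sketch, assuming, as in the analogy above, that the auxiliary hypotheses are independent and that each has a 50% probability:

```python
# Each auxiliary hypothesis may look as plausible as a coin landing heads,
# but the probability that all of them hold shrinks quickly.
p_each_auxiliary = 0.5    # plausible-seeming in isolation
n_auxiliaries = 3         # one auxiliary per kind of evidence to be explained away

p_all_true = p_each_auxiliary ** n_auxiliaries   # 0.5 * 0.5 * 0.5
print(p_all_true)        # 0.125: only a 12.5% chance that all three hold
print(1 - p_all_true)    # 0.875: 87.5% confidence that at least one of them fails
```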
Appealing to multiple independent auxiliary hypotheses to accommodate evidence faces the same problem. If those auxiliary hypotheses are not sufficiently supported, then the appeal will fail, regardless of whether the auxiliaries concern creationism, someone not having a crush on you, or something else entirely—even when those auxiliary hypotheses might seem plausible in isolation. And when this is the case, the evidence can favor a consiliating alternative explanation—like evolution—that explains multiple kinds of evidence.

Requirement #7: Caution about Pervasive but Fallible Heuristics

Above, I’ve discussed many ideas about how to assign probabilities, but there’s ample evidence that we often assign probabilities using so-called heuristics—that is, efficient but often sub-optimal thinking processes. And while sometimes useful, they can cause us to assign inaccurate probabilities. A few heuristics are worth being especially cautious of: availability, representativeness, and the tendency to treat hypotheses consistent with the evidence as equally probable. Consequently, we should be careful in thinking something is probable or improbable merely because we can or can’t easily call relevant instances to mind (as per availability) or because it resembles typical instances (as per representativeness). We also need to avoid thinking hypotheses are equally probable merely because they are consistent with the evidence.
Instead, if we are cautious about these heuristics and meet the other six requirements, then I think we will be highly accurate Bayesians.