JOHN WILCOX

John's Blog

Comparing the Constitutions of New Zealand and the United States

3/22/2025

I have recently been studying the constitutions of the United States and New Zealand. I use  the term “constitution” here to mean the foundational principles upon which their governments are built. 

The two countries have quite different constitutions. The United States has its constitution built on the law entitled—quite aptly—the “Constitution of the United States”, arguably alongside the “Declaration of Independence”. In contrast, New Zealand’s constitution comprises various statutes, constitutional conventions and other important constitutional sources—salient among which are the Constitution Act 1986, the New Zealand Bill of Rights Act 1990 and the Treaty of Waitangi.

I want to highlight several themes which strike me, starting with the United States’ constitution.

Read More

Killingsworth Re-analysis script

3/14/2025

I've just posted a piece in Psychology Today about money and happiness. For interested researchers, here is the data used in the analysis, and here is the script used to do the analysis itself.

What is a “social impact scholar”?

12/2/2024

(Reposted from the LSE Impact Blog)

Having a positive impact beyond academia can often be seen as a requirement, rather than as a personal orientation to research and its potential to create social change. 
John E. Wilcox and Brandon Reynante reflect on their experience as social impact scholars and what it means for their research.

Our adult lives have been largely devoted to research, so it makes sense to ask, “Why do we do it?”. Our response to this question may have changed over time, but our current answer is “social impact”—that is, to have a positive impact on society (assuming some understanding of that term).

As we use the term, then, a “social impact scholar” is anyone who undertakes research largely with this objective of social impact in mind. For example, we currently do research on how education can better prepare our societies for severe climate change in the future. We also know many others admirably working on social impact through, for instance, research that aims to improve the justice system or adolescent mental health.

Read More

Why being a JDM scholar is sometimes hard—really hard

9/6/2024


TL;DR KEY POINTS

  • Judgment and decision making scholars--or "JDM scholars", for short--study how to improve judgment and decision making in diverse domains
  • Despite their motivation or ability to help others in these domains, life can sometimes be tough for them, and for three reasons:
    1. People sometimes think JDM scholars are useless because they are generalists who, it is alleged, lack "specialization" or "experience" in those people's own specific domains
    2. People sometimes think JDM scholars are wrong because those people are psychologically programmed to think in intuitively compelling but demonstrably unreliable ways which JDM scholars are trained to avoid (even though JDM scholars are sometimes wrong too)
    3. People sometimes hate JDM scholars because the JDM scholars think those people are wrong, and sometimes in ways with negative consequences
  • Nevertheless, JDM scholars might make life easier for themselves in a couple of ways:
    1. By finding so-called "JDM champions" who have the time, patience and open-minded humility to advocate JDM science in their domain
    2. By aiming to make their communications accessible and tactful—specifically by avoiding criticism and by fostering positivity

THE BACKGROUND

My life is great in a lot of ways: I’m lucky to have a wonderful family and many projects and activities which I enjoy, for example. Thankfully, then, I'm very happy with my life overall.

But despite that, one thing about my life can at times be very difficult and frustrating: being a judgment and decision making scholar. 

A judgment and decision making scholar—or a “JDM scholar” for short—is someone who professionally studies judgment and decision making: that is, someone who studies how we do, or how we should, make judgments and decisions. I’m a JDM scholar because I want to improve judgments and decisions, both my own and those of others in their domains.

And there are many domains where judgment and decision making could be improved: a JDM scholar could reduce false death sentence convictions in law, fatal misdiagnoses in medicine or disastrous policies in politics, to take a few of countless examples. 

But despite both the possibility and promise of improving judgment and decision making in these domains, there are many reasons why this can be difficult or impossible for JDM scholars.

I will explore some of them here, as well as why I think they sometimes stem from assumptions that are specious—that is, superficially plausible but actually wrong.

(A caveat, though: while this blogpost talks of "people", it is not written with any specific "people" in mind--unless otherwise stated.)

Read More

The seven requirements of highly accurate Bayesians

8/9/2024


TL;DR key points

  • We all form judgments about the world, and we need these to be accurate in order to make good decisions
  • In epistemology and philosophy of science, “Bayesianism” is the dominant theory about how to form rational judgments
  • However, not all of us are always Bayesians, and not all Bayesians are always accurate
  • This post then articulates seven requirements to be an accurate Bayesian, some of which are widely known (i.e. requirements 1 to 3) while others may be less so (i.e. requirements 4 to 7)
  • The requirements are as follows:
               1. Assign likelihoods to evidence
               2. Assign prior probabilities to the hypotheses
               3. Update using Bayes’ theorem
               4. Use calibrated probabilities
               5. Recognize auxiliary hypotheses
               6. Recognize consilience
               7. Be cautious about fallible heuristics

​THE IMPORTANCE OF BAYESIANISM​

As discussed elsewhere, in many important contexts, we need to form accurate judgments about the world: this is true of medical diagnosis and treatment, of legal proceedings, of policy analysis and indeed of a myriad of other domains. And as discussed elsewhere, more accurate judgments often mean better decisions, including in contexts where they can be a matter of life and death—such as medicine and law.

In analytic epistemology and philosophy of science, “Bayesianism” is the dominant theory of how we should form rational judgments of probability. Additionally, as I discuss elsewhere, Bayesian thinking can help us recognize strong evidence and find the truth in cases where others cannot. 

But there’s ample evidence that humans are not Bayesians, and there’s ample arguments that Bayesians can still end up with inaccurate judgments if they start from the wrong place (i.e. the wrong “priors”).

So, given the importance of accurate judgments and given Bayesianism’s potential to facilitate such accuracy, how can one be an accurate Bayesian?

Here, I argue that there are seven requirements of highly accurate Bayesians (somewhat carrying on the Stephen Covey-styled characterization of rationality which I outlined here). Some requirements will be well-known to relevant experts (such as requirements 1 to 3) while others might be less so (such as requirements 4 to 7). In any case, this post is written for both the expert and the novice, hoping to say something unfamiliar to both—while the familiar remainder can be easily skipped.

With that caveat, let us consider the first requirement.


Read More

The seven “irrational” habits of highly rational people

8/8/2024

(Forthcoming in Psychology Today)

TL;DR key points

  • We all make judgments about the world and then use these to make important decisions
  • But if someone made perfectly accurate judgments and sound decisions, would we recognize it?
  • The science and philosophy of judgment and decision-making suggest that the answer is often “no”
  • This post identifies 7 habits of highly rational people—habits which others often label as “irrational”:
               1. Highly rational people are confident in things despite “no good evidence” for them
               2. They are confident in things which are outright false
               3. They countenance the “impossible” and are “paranoid”
               4. They avoid risks that don’t happen
               5. They pursue opportunities that fail
               6. They are often irrational 
               7. They do things that are often “crazy” or “unconventional” 
  • Lastly, I conclude with some evidence-based suggestions about how we can distinguish the genuinely rational from the irrational:
               1. Measure calibration
               2. Learn norms of reasoning
               3. Think in terms of expected utility theory (a small worked example follows this list)
  • This might help us to both recognize and make trustworthy judgments and decisions in our lives
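
To make the expected-utility suggestion (suggestion 3 above) concrete, here is a minimal sketch in Python. The evacuation scenario and its numbers are hypothetical, invented purely for illustration; it is meant only to show why habits 4 and 5 can be rational.

```python
# A minimal expected-utility sketch: a choice can be rational even when,
# in hindsight, the risk it guarded against never materialized (habits 4 and 5).
# The scenario and all numbers are hypothetical.

def expected_utility(outcomes):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice: evacuate ahead of a storm or stay put.
# Evacuating has a small fixed cost either way; staying is costless unless the storm hits.
evacuate = [(0.2, -10), (0.8, -10)]     # (probability, utility): storm hits / storm misses
stay     = [(0.2, -1000), (0.8, 0)]     # storm hits: catastrophic; storm misses: no cost

print("EU(evacuate) =", expected_utility(evacuate))  # -10.0
print("EU(stay)     =", expected_utility(stay))      # -200.0
# Evacuating maximizes expected utility and so is the rational choice here, even
# though 80% of the time the storm misses and the evacuation looks "wasted".
```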

THE IMPORTANCE OF RECOGNIZING WHAT'S RATIONAL AND WHAT'S NOT
If someone were as rational as could be—with many accurate and trustworthy judgments about the world, and with sound decisions—would we recognize it? There are reasons to think the answer is “No”. In this piece, I aim to challenge prevailing intuitions about rationality: I will argue that the philosophy and science of judgment and decision-making reveal a number of ways in which what appears to be rational diverges from what actually is rational.

This piece takes its title from Stephen Covey’s well-known book “The 7 Habits of Highly Effective People”. I will argue that, similarly, there are seven habits of highly rational people—but these habits can appear so counter-intuitive that others label them as “irrational”. Of course, the rationality of these habits might be obvious to specialists in judgment and decision-making, but I find they are often not so obvious to others of the sort for whom this piece is written.

In any case, not only are these habits potentially interesting in their own right, but recognizing them may also help to open our minds, to help us better understand the nature of rationality and to better identify the judgments and decisions we should trust—or not trust—in our own lives.

Without further ado, then, I present…

​THE SEVEN "IRRATIONAL" HABITS OF HIGHLY RATIONAL PEOPLE
 
​          1. Highly rational people are confident in things despite “no good evidence” for them

The first habit of highly rational people is that they are sometimes confident in things when others think there is “no good evidence” for them.

Read More

How to recognize good evidence and find the truth where others cannot: Identifying and overcoming “likelihood neglect bias”

8/8/2024

(Forthcoming in Psychology Today)

​THE TL;DR KEY POINTS

  • As humans, we are subject to biases that often prevent us from forming accurate judgments about the world
  • In a recently published set of experiments, I show how a newly introduced bias can also do this: likelihood neglect bias
  • To illustrate the bias and how to overcome it, I discuss how it arises in versions of the “Monty Hall problem”
  • In some versions of the problem, the evidence can objectively favor a hypothesis with a probability of 91%
  • However, the experiments show untrained participants are oblivious to this: they effectively think the evidence is irrelevant, and so they do not recognize objectively strong evidence for the truth when presented with it
  • I then apply these ideas to other more realistic contexts—including medicine, law and mundane situations—to show how they can help us recognize good evidence and find the truth where others may not

THE IMPORTANCE OF RECOGNIZING GOOD EVIDENCE

​We all need to form accurate judgments about the world in many diverse and important contexts. What is the correct diagnosis for someone’s medical condition? Does someone have a crush on you? Did the defendant kill the victim? Here, I will discuss how the evidence can reveal the truth about these questions--and potentially others which you might care about--but only if we think in the right ways.

It’s well-documented that various biases can hinder us in our quest for truth. In a recently published paper in Judgment and Decision Making (freely available here), I introduce a new cognitive bias: likelihood neglect bias. Understanding this bias, and how to overcome it, can help us recognize good evidence and find the truth in numerous cases where others might not.

To show this, though, I’ll use a well-known brain-teaser which reveals this bias—the Monty Hall problem—and then I’ll apply the emerging ideas to show how we can find the truth in other realistic cases—including medicine, law and more mundane topics. You might then want to apply these ideas to other cases you care about.
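
As a preview of the reasoning, here is a minimal simulation of the classic Monty Hall problem in Python (the standard version, not the specific variants studied in the paper). It shows why the likelihoods matter: because the host only ever opens a door that hides a goat and is not your pick, switching wins about two-thirds of the time, even though the host's reveal can intuitively seem irrelevant.

```python
# A minimal simulation of the classic Monty Hall problem (standard rules only).
# The host always opens a non-chosen door that does not hide the car, so the
# host's choice carries evidence: switching wins about 2/3 of the time.
import random

def win_rate(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("P(win | stay)   =", round(win_rate(switch=False), 3))  # about 0.33
print("P(win | switch) =", round(win_rate(switch=True), 3))   # about 0.67
```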

Read More

How can we measure the accuracy of judgments and determine which ones to trust?

8/2/2024

(Forthcoming in Psychology Today)

​THE TL;DR KEY POINTS

  • I recently published an argument for calibrationism, the idea that judgments about the world are trustworthy only if—among other things—there’s evidence that they are produced in ways that are “well calibrated”
  • A set of judgments is well calibrated just in case it assigns, say, 80% probabilities to things which are true 80% of the time, 90% probabilities to things which are true 90% of the time and so on
  • In this post, I provide a tool for measuring calibration, and I share some ideas about what to do and what not to do when measuring calibration, using my own experience as an example 

PRELIMINARY CLARIFICATION: WHAT IS CALIBRATION AND CALIBRATIONISM?
As I mentioned elsewhere, I recently published a paper arguing for calibrationism, the idea that judgments of probability are trustworthy only if there’s evidence they are produced in ways that are calibrated—that is, only if there is evidence that the things to which one assigns probabilities of, say, 90% happen approximately 90% of the time.

Below is an example of such evidence; it is a graph which depicts the calibration of a forecaster from the Good Judgment project—user 3559:
[Graph: calibration of Good Judgment Project forecaster, user 3559]
​The graph shows how often the things they assign probabilities to turn out to be true. For example, the top right dot represents all the unique events which they assigned a probability of around 97.5% to before they did or didn’t occur: that Mozambique would experience an onset of insurgency between October 2013 and March 2014, that France would deliver a Mistral-class ship to a particular country before January 1st, 2015 and so on for 17 other events. Now, out of all of these 19 events which they assigned a probability of about 97%, it turns out that about 95% of those events occurred. Likewise, if you look at all the events this person assigned a probability of approximately 0%, it turns out that about 0% of those events occurred. 
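
For concreteness, here is a minimal sketch of how such a calibration check can be computed, assuming a hypothetical list of probability forecasts paired with outcomes. It is not the Good Judgment Project data or the measurement tool described in this post.

```python
# A minimal calibration check: group forecasts by their stated probability and
# compare each group's stated probability with how often the forecast events
# actually occurred. The forecasts below are invented for illustration.
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: list of (stated_probability, occurred) pairs."""
    groups = defaultdict(list)
    for p, occurred in forecasts:
        groups[round(p, 1)].append(occurred)  # group by probability, to the nearest 0.1
    return {p: (sum(outcomes) / len(outcomes), len(outcomes))
            for p, outcomes in sorted(groups.items())}

# Six made-up forecasts: three at 90% (two came true) and three at 10% (one came true).
forecasts = [(0.9, True), (0.9, True), (0.9, False),
             (0.1, False), (0.1, False), (0.1, True)]
for stated, (observed, n) in calibration_table(forecasts).items():
    print(f"stated {stated:.0%} -> observed {observed:.0%} (n={n})")
# Well-calibrated forecasters have observed frequencies close to their stated probabilities.
```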

However, not all people are like this. Below is a particular individual, user 4566, who assigned probabilities of around 97% to things which were true merely 21% of the time, such as Chad experiencing insurgency by March 2014 and so on.

Read More

When should we trust the judgments from ourselves or others? The calibrationist answer

8/2/2024

(Forthcoming in Psychology Today)

​THE TL;DR KEY POINTS

  • We all make judgments of probability and use these to inform our decision-making
  • But it is not obvious which judgments to trust, and bad outcomes occur if we get it wrong—such as fatal misdiagnoses or false death sentence convictions
  • How do we determine which judgments to trust—both from ourselves or others?
  • I recently argued for inclusive calibrationism, which gives a two-part answer to this question
  • The first part says judgments of probability are trustworthy only if there’s evidence they are produced in ways that are "well calibrated"—that is, only if there is evidence that the things to which one assigns probabilities of, say, 90% happen approximately 90% of the time
  • The second part says that judgments of probability are trustworthy only if they are also inclusive of all the relevant evidence
  • This blogpost then shares some ideas for implementing calibrationism, such as measuring calibration and creating evidence checklists to figure out how inclusive a judgment is

​THE IMPORTANCE OF TRUSTWORTHY JUDGMENTS

We all make judgments of probability and depend on them for our decision-making.

However, it is not always obvious which judgments to trust, especially since a range of studies suggest these judgments can sometimes be more inaccurate than we might hope or expect. For example, scholars have argued that at least 4% of death sentence convictions in the US are false convictions, that tens or even hundreds of thousands of Americans die of misdiagnoses each year and that experts can sometimes be 100% sure of predictions which turn out to be false 19% of the time. So we want trustworthy judgments, or else bad outcomes can occur.

How, then, can we determine which judgments to trust—either from ourselves or from others? In a paper recently published here and freely available here, I argue for an answer called “inclusive calibrationism”—or just “calibrationism” for short. Calibrationism says trustworthiness requires two ingredients—calibration and inclusivity.

Read More

New book, "Human Judgment", is now published

1/2/2023


​THE TL;DR KEY POINTS

  • We all make judgments, and our important life decisions depend on them--but how good are those judgments?
  • My new book "Human Judgment: How Accurate is It, and How Can it Get Better?" investigates this topic (it's now available to purchase here)
  • It has two somewhat newsworthy items: one bad, the other good
  • The bad news is that science suggests humans are often much more inaccurate than we might hope or expect--for example, thousands die each year because of misdiagnoses and false death sentence convictions
  • The good news is that science suggests a number of concrete ways to measure and improve the accuracy of our judgments
  • The book then outlines and summarizes those recommendations

We all make countless judgments, and our important life decisions depend on them.  

My new book, “Human Judgment”, investigates these judgments, and it is now available to purchase online here.

The book concerns two topics to do with human judgment, as implied by the subtitle: How accurate is it, and how can it get better?

It has two somewhat newsworthy items, one bad and the other good.

The bad news is that the science suggests that human judgment is often much more inaccurate than we might hope or expect. For example, some researchers have estimated that as many as 40,000 to 80,000 US citizens die because of preventable misdiagnoses—and that’s each year. If they are right, that’s a yearly death toll at least 13 times higher than that of the September 11th terrorist attacks. Unfortunately, medicine is not unique either: judgmental inaccuracy can afflict a number of other areas in society as well. As another example, some researchers estimate at least 4.1% of death sentence convictions in the US are actually false convictions; this implies that some people are tried, convicted and executed for horrific crimes that they never actually committed. Those are just a few of the numerous studies painting a less than ideal picture of human judgment: we make inaccurate judgments about medical diagnoses, about criminal convictions and about a number of other areas.

Read More

    Author

    John Wilcox

    Cognitive scientist @ Columbia University
    Founder @ Alethic Innovations

