How we know what we know
Why do so many people believe things that aren’t true, even when unprecedented access to information and evidence should change their minds? It’s a question I asked myself after a brief encounter with a full-blown COVID conspiracist, who told me that my face mask was unnecessary because the pandemic was a hoax.
While his search for answers led him down a rabbit hole of misinformation, disinformation, and fake news, my light at the end of the tunnel came in the form of fascinating research from Celeste Kidd, a professor of psychology at the University of California at Berkeley. According to her work, people’s access to truth isn’t limited by what or how much information is out there. Instead, it’s governed by the cognitive systems we possess.
Her first major finding is that beliefs are inferences, not records. Because there’s far more information in the world than anyone could ever process, we use shortcuts. That means we form beliefs based on a small subset of available information. We also make at least half a million decisions every day about what to explore, what to click, what to read, what to listen to, what to watch, and who to talk to. How we make these decisions is informed by what we already know and what we believe to be true.
Understanding that our beliefs are (often coarse) approximations based on our experiences in the world leads to the second major finding: beliefs guide interests. Our cognitive systems evolved to handle information overload, and one of the tools they use to do that is boredom. Boredom kicks in when we’re confident we already know everything there is to know. This stops us from hyper-fixating, but it does so at the cost of sometimes shutting off the information tap before we’ve landed on the right idea.
On the surface, this seems sensible. After all, a rational agent with limited time, attention, and computational power should focus on learning the material it is most uncertain about. But there’s a problem: sometimes people are certain when they shouldn’t be. And this certainty, according to Kidd’s third finding, diminishes interest. More than that, it’s what you think you know that determines your curiosity, not what you actually know. If you think you already know the answer to a problem, you won’t be curious enough to check whether you’re right, and you may get stuck believing something that’s wrong.
Sadly, things are even worse than that. Because we don’t have accurate models of our own uncertainty, once we become certain of something, we’re unlikely to consider subsequent evidence. Specifically, if you think you know the answer to something, not only will you not check, but if you’re presented with the right answer, you’ll be more inclined to ignore it. This explains why people can cling to stubborn beliefs in the face of contradictory evidence, and why, even if I’d chosen to argue with the COVID conspiracist, it probably wouldn’t have worked.
So, where does human certainty come from? According to Kidd’s research, it appears to be driven by feedback. If you get an idea into your head and then find just a few pieces of confirmatory evidence (like a single YouTube video), you can become certain before encountering any disconfirming evidence. Unfortunately, the abundance of pseudoscientific material online increases the chances that whatever you find in the early stages of making up your mind will amplify these problems. And while confirmatory feedback is problematic, a lack of feedback can be problematic for different reasons: with less feedback, we tend to become overconfident and overestimate how many other people share our view.
All this brings us to the fourth major finding: human beliefs are formed surprisingly quickly. Indeed, new technologies are pushing us towards higher confidence in our beliefs faster than would be the case if we were sampling information from the world in a less algorithmically curated way. If a search yields a string of results that all espouse the same viewpoint, it can lead people to premature and unjustified certainty. And once that happens, it’s hard to find an intervention that will bring them back.
Unfortunately, there’s no such thing as a neutral tech platform. Whenever anyone designs a technology that delivers information of any kind to people, it’s powered by algorithms that choose the order in which that information is presented. This order, as decades of psychology research have shown, makes a big difference to the beliefs people walk away with. Worse, when tech platforms optimise for engagement, they’re trying to manipulate users into staying online for longer. All this is to say that the algorithms pushing content online have profound impacts on what we believe, both as individuals and as a society.
So, how do we design and implement better systems, ones that protect people from losing the opportunity to discover truth in the world? It starts with accepting the fifth and final major finding: inaccurate beliefs are ubiquitous and probably unavoidable. We might like to think of ourselves as rational individuals who don’t believe things unsupported by evidence. But that’s not who we are. We can’t reason about everything in the world from scratch; if we did, we wouldn’t be the hyper-specialised species that dominates the world the way we do.
Ultimately, we have to remember that our beliefs form the basis of our decisions. Sure, it may not be a big deal if a teenager walks away from an online search thinking that activated charcoal is more useful than it actually is. But lives are in danger when parents choose not to vaccinate their children or when people deny that climate change is real. Addressing that requires taking the time to consciously question what we know, how we know it, and whether it’s time to search for other views. It’s the only way to bring truth to light.