Expectation Bias
We tend to perceive the world through the lens of what we already expect to find. Before we've evaluated a single piece of evidence, our brain has already generated a prediction, and that prediction quietly shapes what registers as signal, what gets explained away, and what we walk away believing we saw.
But first, let's hear from this week's sponsor...
SPONSOR
Build professional case studies in hours, not days. AI-powered case study builder with guided templates for UX case studies and presentations. Get 50% off with BEYONDUX at checkout!
Now, back to Expectation Bias ⏬
Have you ever walked out of a usability session completely confident in your findings, only to ship something that quietly missed the mark?
Your brain has never seen the world as it actually is.
That's not hyperbole. It's literally how perception works.
Your brain doesn't wait for information to arrive before forming an opinion. It predicts what's coming first, then filters everything through that prediction before you've made a single conscious decision about what the evidence means.
Which means every research session you've ever run, every design review you've ever sat through, every piece of feedback you've ever received...
All of it got filtered before you even decided what to think about it.
Robert Rosenthal figured this out in the early 1960s. He told one group of student researchers their lab rats were genetically superior maze runners. Told another group theirs weren't. The rats were identical.
The results were not.
The students' expectations had quietly changed how they handled, coached, and observed their animals. Same data. Different filter.
Sound familiar?
Here's where it shows up in your work:
⇢ The usability session you confidently walked out of, right before something shipped and quietly missed the mark
⇢ The five-second hesitation you logged as "minor"
⇢ The teammate whose work gets waved through while someone else's gets interrogated at every turn
⇢ The leadership-championed feature that somehow always gets a generous read
The scary part isn't that we see what we want to see.
It's that we don't realize we're doing it.
Would you rather listen? Follow the Cognition Catalog on Spotify!
Here’s something most people don’t realize about how the brain works: it isn’t a camera. It doesn’t passively record what’s in front of it and then form an opinion. It does something more like the opposite.
The brain is constantly generating predictions about what it’s about to experience. When reality matches the prediction, the signal gets dampened and treated as expected, unremarkable. When reality doesn’t match the prediction, you get what neuroscientists call a prediction error: a signal that something worth paying attention to has happened.
This architecture is genuinely useful. It’s why you don’t have to consciously process every detail of a familiar room. The brain has a model, the model is accurate, and cognitive resources get freed up for things that actually need attention.
The trouble starts when the predictions are wrong, and the brain doesn’t flag it.
The formal research on this idea started in the early 1960s with psychologist Robert Rosenthal, who was studying something surprisingly mundane — lab rats navigating mazes. He told one group of student researchers their rats were genetically superior at finding their way through, and another group that theirs were not. The rats were identical. The results weren’t. The students who expected better performance got better performance, not because the rats were different, but because the students were. Their expectations had quietly changed how they handled, coached, and observed the animals.
That finding sent Rosenthal in a direction that would shape his entire career.
He partnered with elementary school principal Lenore Jacobson to test whether the same dynamic played out with people. Teachers were told that a new assessment had flagged certain students as likely to make significant academic gains in the coming year. The students were randomly chosen. The test was fictional. But those students improved more than their peers over the following year, with the strongest effects among the youngest kids. The expectation had become the outcome.
Rosenthal and Jacobson published their findings as Pygmalion in the Classroom in 1968, named for the myth of a sculptor whose belief in his creation was so complete that it came to life. The parallel was intentional: what we believe about someone can quietly make it true.
What Rosenthal demonstrated in labs, we can observe in teams every day.
Expectation Bias is often discussed alongside confirmation bias, and the two are related — but they’re not the same. Confirmation bias describes what you go looking for. Expectation Bias describes what you notice when the evidence is already in front of you.
Together, they form a tight loop. You expect something, you notice what matches, you ignore what doesn’t, and your expectation gets reinforced. Repeat, indefinitely.
The reason this matters for teams specifically is that most team processes assume a certain level of perceptual neutrality. We assume that when we review research, evaluate a design, or assess a project’s progress, we’re working from a shared reality. We’re often not. We’re each working from our own model, shaped by what we expected to find before we walked into the room.
The mechanism behind it isn’t mysterious. Rosenthal found that teachers were changing their behavior in small but consistent ways—offering more encouragement, more patience, and more opportunities to participate—without realizing they were doing so. The expectation didn’t stay internal. It leaked out through every interaction, and the students responded to it. Intention had nothing to do with it.
SPONSOR
Pick up Jeff White's Storytelling Toolkit! Use BEYONDUX at checkout to get 10% off Jeff’s Storytelling course to influence your team and advance your UX career.
Expectation Bias rarely feels like bias. It feels like experience. It feels like pattern recognition. It feels like knowing your team, knowing the work, knowing what good looks like. And that's exactly what makes it so hard to catch. By the time it's shaping a decision, it's already been mistaken for judgment.
In team dynamics, it often shows up as a label that outlasts its evidence. Once a teammate gets filed away—"the strong one," "the one who overcomplicates things," "the one who's still finding their footing"—that label starts doing the evaluating. A well-regarded engineer's PR gets waved through. A newer designer's concept gets interrogated before it gets considered. The work hasn't changed. The filter has.
It shows up in how work gets reviewed, too. A feature that leadership has championed gets a generous interpretation. Rough edges get noted and moved past. A project that's been quietly doubted gets scrutinized at every turn. Teams present the same quality of progress and get fundamentally different responses, not because the work is different, but because the expectations aren't.
And it shows up in research. A usability session gets conducted with a mental model of how users will respond, and that model shapes what registers as signal. A participant's hesitation barely makes it into the notes. A moment of confusion becomes "they figured it out." The problem isn't that the team is ignoring the evidence. It's that the expectation has already changed what the evidence looks like before anyone decides what to do with it.
The cost is the same regardless of where it shows up: decisions made on distorted observations, people evaluated against fixed labels, and research that reflects what the team predicted rather than what users actually experienced.
🎯 Here are some key takeaways
1️⃣ Notice when you've already made up your mind before the work begins: If the outcome already feels settled before anyone's said a word, that's worth pausing on. Being experienced isn't the same as being objective. Ask yourself what you expect to see and why before you see anything.
2️⃣ Agree on what you're looking for before you start looking: Decide what "good" looks like before the work is in the room. Define success and failure before sessions begin, and agree on the questions the work must answer before it is presented.
3️⃣ Watch how your read on a person shapes your read on their work: Once you've decided someone is strong or weak, that label follows their output around. Ask yourself whether you'd evaluate this differently if you didn't know who made it.
4️⃣ Write your research questions before you write your hypotheses: Most teams do it backward. They form a point of view and design research to test it, which almost guarantees the questions will favor the hypothesis. Draft questions first, from genuine curiosity about what you don't know.
5️⃣ Separate who collects the research from who interprets it: When the people who designed the feature are also running the usability sessions, expectation bias is baked in. They're invested, not neutral. Even a small degree of separation gives your findings a better chance of revealing something you didn't expect.
Explore the full Cognition Catalog
There is much more to explore. Stay tuned for a new bias every Friday!
You iterated, pivoted, circle-backed, and made it out of Q4 (barely).
You’ve earned a Merit-ish Badge!
Choose from four sets:
• Soft Skills, Sharp Tongue
• Cult of Figma Initiation
• Pixel Pusher Survivor Kit
• Field Notes & Frayed Nerves
They don't teach this stuff in school
Learn the things they left off the syllabus.
Also available at these fine retailers