Judgment under Uncertainty: Heuristics and Biases

Tversky, Amos, & Kahneman, Daniel · 1974

  • rs-0008
  • paper
  • decision
  • alignment
  • verified ✓
Citation (APA)

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124

Summary

In this landmark 1974 Science paper, Tversky and Kahneman identify three cognitive heuristics that systematically distort human judgment under uncertainty in predictable, quantifiable ways. Representativeness leads people to judge likelihood by how closely something resembles a prototype rather than by base rate. Availability leads people to estimate frequency by how easily examples come to mind, biasing judgment toward the memorable and recent. Anchoring and adjustment lets an initial reference point exert disproportionate influence on the final estimate, even when the anchor is arbitrary. Each heuristic produces characteristic errors that are robust across contexts, resistant to simple correction, and present even in experts.

Why it matters

People misread their own situations in predictable ways. Understanding these patterns helps the framework anticipate where self-assessment goes wrong.

How we apply it

The Alignment domain and the On Witnessing book apply heuristics research directly to the practitioner's diagnostic process and the 'cost of misreading' mechanism. Representativeness bias explains how a practitioner who has seen many Creative-mode presentations will categorize a new client's scattered behavior as Creative even when the engagement evidence is ambiguous — resemblance to a prototype overrides careful observation. Availability bias explains why recent, emotionally salient cases distort ongoing diagnostic judgment about new clients who superficially resemble them. Anchoring explains why the first label applied to a person — 'strong performer,' 'resistant,' 'not leadership material' — persists even when later evidence contradicts it. The On Witnessing practitioner protocol is specifically designed to interrupt these distortion patterns at the moment they form, before they solidify into false conclusions that shape the entire engagement.

Limitations

Tversky and Kahneman's research was conducted primarily through laboratory experiments using abstract probability tasks, word problems, and hypothetical scenarios with no domain expertise at stake. The transfer from laboratory bias demonstration to naturalistic professional judgment in high-stakes practice settings is contested: domain expertise, feedback richness, and deliberate calibration can reduce or amplify bias effects in ways the original controlled studies did not model. The framework uses heuristics research as a map of predictable failure modes in practitioner reasoning, not as a deterministic account of how any specific practitioner will err in any specific case — the biases are treated as tendencies to interrupt, not as inevitable cognitive laws.

Cited in 0 claims

No claims cite this entry yet.

Curated by Multiple Natures International · multiplenatures.com/research