Peter Zhang



Noise, the new book by Daniel Kahneman, Olivier Sibony, and Cass Sunstein, examines the sources and impacts of inconsistent decision making. In this insightful and original book, the authors invite us to look beyond bias and consider a subtler source of unfairness and inefficiency, and they offer practical advice for improving our decision-making.

Here’s the paperback and audiobook.

Utility: ★★★★★ (5/5)

Writing: ★★★★✰ (4/5)

This book is the lovechild of all three authors. It incorporates Kahneman’s writing style and psychological insights, Sibony’s practical experience in business, and Sunstein’s broad commentary on society. Their contributions are original, thorough, and eminently useful. The diagrams and examples help to drive home the model at the core of the book.

The book does a less-than-spectacular job of rebutting criticisms of noise-reduction strategies. To the charges that such strategies entrench bias and stifle innovation, the authors muster the meager replies of “humans are biased too” and “there’s still some room for creativity.” But algorithmic bias seems harder to correct once implemented in, say, the courtroom. And even in mundane settings like an insurance company, flexibility may allow underwriters to pick up on nuanced signals.

I admire the appendix. It provides instructions for conducting noise audits, a checklist for decision observers, and a guide to correcting predictions. Unlike Kahneman’s Thinking, Fast and Slow, this book explicitly argues that changes to reduce noise can and should be made.



A judgement is a measurement made by a human, like “the Lakers will win the championship” or “there are 50 beans in that jar.” There are many judges beyond the courtroom, including coaches, doctors, lawyers, underwriters, and more. Predictive judgements are verifiable; others concern fictitious, long-term, or intangible outcomes, rendering verification impossible. “Matters of judgement” allow for bounded disagreement - we may disagree at the margins, but we should agree on the extremes - unlike “matters of taste.”


Bias is the average error of a set of judgements. Noise is variability in judgements that should be identical. System noise refers to noise within organizations where interchangeable judges make decisions. Consider doctors diagnosing diseases, judges handing down sentences, or underwriters in an insurance company. Bias and noise contribute equally to overall error as measured by mean squared error (MSE = bias² + noise²). Noise can introduce unfairness and inefficiency in fields ranging from business and forensics to forecasting and politics.
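The error equation can be illustrated with a tiny simulation. All numbers below are hypothetical; the point is that MSE decomposes exactly into squared bias plus squared noise:

```python
import statistics

true_value = 100.0  # the quantity being judged (hypothetical)
judgements = [104.0, 97.0, 110.0, 101.0, 93.0, 107.0]  # hypothetical judges

bias = statistics.mean(judgements) - true_value  # average error
noise = statistics.pstdev(judgements)            # variability of the judgements
mse = statistics.mean((j - true_value) ** 2 for j in judgements)

# MSE = bias^2 + noise^2, exactly
assert abs(mse - (bias ** 2 + noise ** 2)) < 1e-9
print(f"bias={bias:.2f}, noise={noise:.2f}, mse={mse:.2f}")
```

Because the two terms enter the equation symmetrically, shrinking noise improves accuracy just as much as shrinking an equal amount of bias.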

Bias can’t be measured if the true value is unknowable, but noise in a system can be measured via a noise audit, a controlled experiment that measures variability among and within professionals. Level noise refers to variability in the average judgements of different individuals (e.g. a tough judge who gives longer sentences). Pattern noise refers to variability in how individuals rank the same cases; it is in turn comprised of stable pattern noise (e.g. a judge who consistently disfavors minorities) and occasion noise (e.g. a judge in a good mood after lunch).
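A noise audit boils down to a judges-by-cases table. A sketch of the decomposition, with made-up sentencing data: level noise is the variance of each judge’s average, and pattern noise is what remains after removing judge and case averages:

```python
import statistics

# Hypothetical audit: sentences (months) from 3 judges on the same 4 cases
ratings = {
    "judge_a": [12, 24, 18, 30],
    "judge_b": [20, 34, 26, 40],  # consistently tougher: level noise
    "judge_c": [10, 30, 14, 34],  # same average as A, different pattern
}
cases = list(zip(*ratings.values()))  # regroup by case
grand_mean = statistics.mean(v for row in ratings.values() for v in row)

# Level noise: variance of judges' averages around the grand mean
judge_means = {j: statistics.mean(row) for j, row in ratings.items()}
level_var = statistics.pvariance(judge_means.values())

# Pattern noise: residual variability after removing judge and case effects
case_means = [statistics.mean(c) for c in cases]
residuals = [
    v - judge_means[j] - case_means[i] + grand_mean
    for j, row in ratings.items()
    for i, v in enumerate(row)
]
pattern_var = statistics.pvariance(residuals)
print(f"level noise var={level_var:.1f}, pattern noise var={pattern_var:.1f}")
```

(This one table can’t separate stable pattern noise from occasion noise; that requires showing the same judge the same case on different occasions.)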


Objective ignorance refers to intrinsic limits on what we can know: we can’t predict the weather 30 years from now. People underestimate objective ignorance because judgement generates an internal signal - a reward for spinning a coherent narrative. They are surprised to learn that they are overconfident and that simple algorithms perform better. Psychological biases can also contribute to noise, e.g. by overweighting first impressions.

Bias is far more salient and easier to explain than noise. Professionals can easily overlook noise or be overconfident in their own judgement. The result is a systemic failure to address noise.


A third-party decision observer can correct biases in real time. To reduce noise, people should practice decision hygiene - so called because, like handwashing, it guards against an invisible danger without our ever knowing which specific errors it prevents. Best practices:

  1. Prioritize accuracy. Judgements are not about personal preferences. While algorithms may not be necessary, they can help constrain the role of discretion.
  2. Take the outside view. Look at base rates for similar cases; show some humility.
  3. Decompose judgements. The mediating assessments protocol splits a broader judgement (“how good is this candidate?”) into smaller, fact-based assessments (“how well do they communicate? did they go to college?”).
  4. Aggregate independent judges. Aggregation is a powerful method for drawing on diverse perspectives and mitigating noise. But, if judgements aren’t made independently, bias can cascade throughout the system.
  5. Favor relativity. Rankings are less noisy than vague scales.
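The aggregation point (item 4) is worth quantifying. A sketch with hypothetical numbers: averaging k independent judgements shrinks noise by roughly √k, but a shared bias survives averaging - which is why independence matters:

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 50.0
BIAS = 3.0      # shared bias: averaging cannot remove it
NOISE_SD = 8.0  # independent per-judge noise: averaging shrinks it

def judgement():
    """One hypothetical judge: truth, plus shared bias, plus personal noise."""
    return TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD)

def averaged(k):
    """Aggregate k independent judgements by simple averaging."""
    return statistics.mean(judgement() for _ in range(k))

trials = 2000
solo = [averaged(1) for _ in range(trials)]
crowd = [averaged(9) for _ in range(trials)]

print(f"noise, 1 judge : {statistics.pstdev(solo):.2f}")   # ~8
print(f"noise, 9 judges: {statistics.pstdev(crowd):.2f}")  # ~8/3
print(f"bias,  9 judges: {statistics.mean(crowd) - TRUE_VALUE:.2f}")  # still ~3
```

If the judges confer before judging, their errors correlate and the √k reduction evaporates - the cascade the authors warn about.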
