
How to Keep a Decision Journal: The Practice That Compounds Your Judgment

Decision journal: a systematic practice for recording and reviewing decisions to build calibrated judgment and improve decision quality over time

Experience is supposed to be the best teacher. But experience alone, without structured reflection, is a surprisingly unreliable way to improve judgment. The problem is not a lack of feedback. Most professionals receive plenty of outcome feedback over a career. The problem is that the feedback arrives too late, too ambiguously, and too subject to motivated interpretation to produce genuine calibration. We remember the decisions that worked and attribute them to skill; we explain away the ones that didn't with circumstances beyond our control. Without a written record of what we actually believed and why, before the outcome was known, we cannot distinguish between good decisions and lucky ones, or between bad decisions and unlucky ones. A decision journal solves this problem by creating an objective record of reasoning at decision time, making it possible to learn from experience with the precision that experience alone never provides.

Why a Decision Journal Changes Everything

The decision journal is, at its core, a calibration tool. Calibration, the alignment between your confidence in your judgments and the actual accuracy of those judgments, is one of the most powerful predictors of long-term decision quality. Philip Tetlock's decades of research on expert forecasting found that the most consistently accurate forecasters were those with the best calibration: when they said they were 70% confident, they were right about 70% of the time. When they said 90%, they were right about 90% of the time. This calibration, not raw intelligence, domain expertise, or analytical sophistication, was the primary differentiator between superforecasters and average experts.

Calibration cannot be improved without feedback that is specific enough to update beliefs. Without a decision journal, the feedback you receive from outcomes is systematically distorted: you misremember what you believed before the outcome, you attribute success to skill and failure to circumstance, and you interpret ambiguous outcomes in ways that confirm your prior beliefs about your own judgment quality. The journal prevents these distortions by creating an objective record that cannot be rewritten in hindsight.
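
To make "feedback specific enough to update beliefs" concrete, here is a minimal sketch of a calibration check over journal entries. It assumes each recorded prediction is stored as a pair of stated confidence and whether it came true; the representation is an illustrative assumption, not a prescribed format.

```python
# A minimal sketch of a calibration check, assuming each recorded prediction
# is stored as a (stated_confidence, came_true) pair -- an illustrative
# representation, not a prescribed format.
from collections import defaultdict

def calibration_table(predictions, bucket_width=0.1):
    """Group predictions by stated confidence; compare to the actual hit rate."""
    buckets = defaultdict(list)
    for confidence, came_true in predictions:
        bucket = round(confidence / bucket_width) * bucket_width
        buckets[bucket].append(came_true)
    # Well-calibrated judgment: in each bucket, hit rate tracks stated confidence.
    return {round(b, 2): (len(hits), sum(hits) / len(hits))
            for b, hits in sorted(buckets.items())}

# Predictions recorded at 70% confidence should come true about 70% of the time.
sample = [(0.7, True), (0.7, True), (0.7, False), (0.9, True), (0.9, True)]
print(calibration_table(sample))  # roughly {0.7: (3, 0.67), 0.9: (2, 1.0)}
```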

Annie Duke on Decision Records

Former World Series of Poker champion and decision researcher Annie Duke, in her book Thinking in Bets, emphasizes that the only way to separate good decision-making from good luck is to record the reasoning behind decisions before outcomes are known. Without such records, we are subject to "resulting": judging the quality of a decision by the outcome rather than by the quality of the process. Resulting produces exactly the wrong lessons: we learn to repeat processes that produced lucky outcomes and abandon processes that produced unlucky ones, even when the underlying decision quality was the reverse. The decision journal is the primary tool for countering resulting.

Beyond calibration, the decision journal produces three additional benefits that compound over time. First, it forces clarity at decision time: the act of writing out your reasoning reveals gaps, hidden assumptions, and motivated reasoning that remain invisible when thinking is purely internal. Second, it creates an intellectual archive: the reasoning you develop for a difficult decision is often directly applicable to similar decisions years later, but only if it is written down. Third, it builds metacognitive awareness: over time, you develop a clearer picture of which types of decisions you make well, which cognitive biases recur in your reasoning, and which domains systematically exceed your competence. This self-knowledge is irreplaceable.

The Problem with Learning from Experience

The assumption that experience automatically produces wisdom is one of the most consequential false beliefs in professional life. Research on expert performance across domains (medicine, investing, management, law) consistently finds that years of experience do not reliably correlate with better judgment, except in environments with rapid, clear, and unambiguous feedback. Where feedback is delayed, ambiguous, or absent, experience often produces greater confidence without greater accuracy, a particularly dangerous combination.

Without structured reflection, three specific failure modes prevent experience from producing genuine learning:

Outcome Bias and Resulting

We judge past decisions primarily by their outcomes, not by the quality of the reasoning that produced them. A decision that was poorly reasoned but produced a good outcome (due to luck, favorable conditions, or factors unrelated to the decision) is remembered as a good decision. A decision that was carefully reasoned but produced a bad outcome (due to genuine uncertainty or unfavorable conditions) is remembered as a mistake. Over time, this produces the wrong lessons: we repeat poor processes that got lucky and abandon good processes that got unlucky.

Hindsight Bias

Once an outcome is known, we unconsciously revise our memory of what we believed before the outcome. Events that actually surprised us are remembered as having seemed more predictable. Outcomes we got right are remembered as having seemed more obvious in advance. This hindsight bias makes it nearly impossible to accurately assess the quality of past reasoning without a written contemporaneous record, because the memory of what we believed has been overwritten by knowledge of what happened.

Attribution Asymmetry

We attribute success to skill and failure to circumstances. This self-serving bias prevents the extraction of accurate learning from outcomes: good outcomes teach us we are competent, and bad outcomes teach us about external adversity, regardless of whether the decision quality was actually different in the two cases. Written decision records, reviewed against actual outcomes, reveal the degree to which our self-assessments are accurate, often surprising us with both domains of genuine competence and persistent blind spots.

What to Record: The Decision Journal Template

The content of a decision journal entry is what determines whether reviewing it later will produce genuine learning or merely confirm existing beliefs. The critical principle is: record reasoning, assumptions, and predictions before the outcome is known. An entry that captures only conclusions provides no basis for learning; an entry that captures the full reasoning structure makes it possible to identify exactly where and why the reasoning succeeded or failed.

Action Steps

Each entry should capture seven elements, in a fixed order:

  • The decision and its context: what is being decided, and why now
  • The options seriously considered, including the ones you rejected
  • The reasoning: the actual argument for the choice, not just the conclusion
  • The key assumptions that must hold for that reasoning to be sound
  • Predicted outcomes: specific, observable, and time-bound, each with a confidence level
  • Your emotional and mental state at the time of the decision
  • The review date: when meaningful outcome data will be observable
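
For those who prefer to keep the journal in plain text or code, a minimal sketch of the seven elements as a Python record follows. The class and field names are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of the seven-element entry as a dataclass.
# All names here are illustrative, not a prescribed schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionEntry:
    decision: str                         # 1. what is being decided, and the context
    options: list[str]                    # 2. alternatives seriously considered
    reasoning: str                        # 3. the argument for the choice, not just the choice
    assumptions: list[str]                # 4. what must be true for the reasoning to hold
    predictions: list[tuple[str, float]]  # 5. specific, time-bound outcomes with confidence (0-1)
    emotional_state: str                  # 6. mood and pressure at decision time
    review_date: date                     # 7. when meaningful outcome data will exist
    review_notes: str = ""                # filled in at review time; the rest is never edited

entry = DecisionEntry(
    decision="Select a vendor for the data pipeline rebuild",
    options=["Vendor A", "Vendor B", "build in-house"],
    reasoning="Vendor B's team has shipped two comparable migrations on schedule...",
    assumptions=["our data volume stays under 2x current levels"],
    predictions=[("migration completes within 4 months", 0.7)],
    emotional_state="calm, but under quarter-end time pressure",
    review_date=date(2026, 9, 1),
)
```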

How to Review: Extracting the Learning

Recording decisions without reviewing them against outcomes is the most common failure mode in decision journaling. The record alone has limited value: it is the comparison between recorded reasoning and actual outcomes that produces calibration. Reviews should happen on two timeframes: individual entry reviews when outcomes are observable, and pattern reviews conducted periodically across all entries.

Individual Entry Review

When enough time has passed to observe meaningful outcomes (typically 3 to 12 months, depending on the decision type), review each entry with a structured set of questions:

  • Did the outcome match my prediction? If not, where did my reasoning go wrong?
  • Which of my key assumptions proved correct? Which were wrong?
  • Was the decision actually good, regardless of the outcome? (Would I make the same decision again with the same information?)
  • What did I know that I underweighted? What did I not know that mattered?
  • What would I do differently in the decision process (not the choice, but the process)?

Write brief answers to these questions directly in the journal entry. The act of writing forces clarity and prevents the review from becoming a vague reflection that produces no actionable learning.
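
As a sketch of how this review might be captured mechanically, the function below appends the outcome and written answers to an entry shaped like the DecisionEntry sketch above. The structure is an assumption; the questions are the ones just listed.

```python
# A minimal sketch of recording an individual entry review.
# Assumes an entry object like the DecisionEntry sketch above.
REVIEW_QUESTIONS = [
    "Did the outcome match my prediction? If not, where did my reasoning go wrong?",
    "Which of my key assumptions proved correct? Which were wrong?",
    "Was the decision actually good, regardless of the outcome?",
    "What did I know that I underweighted? What did I not know that mattered?",
    "What would I do differently in the decision process (not the choice)?",
]

def review_entry(entry, observed_outcome: str, answers: list[str]) -> None:
    """Write the outcome and brief answers into the entry's review notes.
    The original pre-outcome reasoning is never edited -- the record must
    stay contemporaneous, or hindsight bias creeps back in."""
    if len(answers) != len(REVIEW_QUESTIONS):
        raise ValueError("answer every question in writing, even briefly")
    lines = [f"OUTCOME: {observed_outcome}"]
    for question, answer in zip(REVIEW_QUESTIONS, answers):
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    entry.review_notes = "\n".join(lines)
```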

Pattern Review

Quarterly or annually, review all entries from the period and look for patterns across decisions. This meta-analysis is where the most valuable learning lives:

Look for These Patterns

  • Domains where your confidence consistently exceeds accuracy (chronic overconfidence)
  • Cognitive biases that recur across multiple entries (your personal blind spots)
  • Types of decisions where your reasoning is consistently sound
  • Assumptions that prove wrong repeatedly (systematic errors in your model of reality)
  • Emotional states that correlate with poor decision quality

Common Patterns to Diagnose

  • Consistently optimistic timeline estimates (planning fallacy)
  • Overweighting recent information vs. base rates (availability heuristic)
  • Patterns of reversing decisions made under time pressure
  • Domains where outside input consistently outperforms your solo judgment
  • Recurring failure to consider specific types of risk

The pattern review is what converts individual lesson-extraction into systematic improvement of your decision process. Each identified pattern points to a specific change in how you approach similar decisions in the future: a more conservative timeline estimate, a mandatory outside view check, a red-flag question to ask in a specific type of situation.
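
A sketch of the mechanical part of such a pattern review follows, assuming each reviewed entry carries a domain tag alongside its stated confidence and observed result. All field choices are illustrative assumptions.

```python
# A minimal sketch of the pattern review's mechanical step, assuming each
# reviewed entry carries a domain tag, a stated confidence, and whether the
# main prediction held. Field choices are illustrative assumptions.
from collections import defaultdict

def overconfidence_by_domain(reviewed):
    """Per domain, compare average stated confidence to the actual hit rate;
    a large positive gap flags chronic overconfidence in that domain."""
    by_domain = defaultdict(list)
    for domain, confidence, came_true in reviewed:
        by_domain[domain].append((confidence, came_true))
    return {
        domain: {
            "confidence": sum(c for c, _ in rows) / len(rows),
            "hit_rate": sum(1 for _, t in rows if t) / len(rows),
        }
        for domain, rows in by_domain.items()
    }

reviewed = [("hiring", 0.9, False), ("hiring", 0.8, True), ("strategy", 0.6, True)]
print(overconfidence_by_domain(reviewed))
# hiring: confidence 0.85 vs hit rate 0.50 -- a candidate blind spot
```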

Which Decisions Belong in the Journal?

Not every decision warrants a full journal entry. The journal should be selective enough to maintain without becoming burdensome, but comprehensive enough to capture the decisions where learning matters most. A practical threshold: any decision that, if it goes wrong, you will be bothered by for more than a month. This naturally captures medium-to-large stakes decisions while excluding the hundreds of trivial daily choices that would make the journal unmanageable.

Decisions That Belong in the Journal

  • Hiring and team decisions
  • Significant financial commitments (investments, major purchases)
  • Strategic direction changes in projects or career
  • Partnership and vendor selections
  • Major time commitments (projects, initiatives, relationships)
  • Any decision you feel conflicted or uncertain about

Decisions That Usually Don't Need It

  • Routine operational decisions with clear criteria
  • Easily reversible low-stakes choices
  • Decisions within a domain where you have extensive calibrated experience
  • Time-sensitive decisions requiring immediate action
  • Decisions where the only meaningful option is obvious

One important nuance: include some medium-stakes decisions even when they feel straightforward, because these are where patterns are most visible. Your most consequential biases are likely to show up not in the decisions that feel hard (where you are more likely to apply deliberate reasoning) but in the decisions that feel easy, where a confident, fast System 1 judgment determines the outcome without triggering careful evaluation.

The Most Common Decision Journal Mistakes

Recording Conclusions Instead of Reasoning

The most common failure: writing "I chose X because it seemed best" rather than the actual reasoning. Conclusions without reasoning provide no basis for learning: you cannot determine from the entry whether the reasoning was sound or flawed, whether the assumptions were realistic or optimistic, or whether the confidence was calibrated. Every entry should capture the argument for the choice, not just the choice itself.

Never Reviewing Against Outcomes

A journal that is written but never reviewed against actual outcomes is a diary, not a learning system. The entire value of the practice resides in the comparison between recorded predictions and actual results. If entries are not reviewed with outcome data, and with specific questions about where the reasoning succeeded or failed, the journal produces no calibration benefit. Build the review into the system: when you create an entry, schedule a calendar reminder for the review date.

Applying It Only to Catastrophic Decisions

Many people begin decision journals after a significant failure, intending to capture only the most important decisions. This selection bias means the journal captures too few decisions for pattern recognition to work: you need volume across similar decision types to identify reliable patterns. Apply the journal to all decisions above the threshold, not just the ones that feel consequential in advance.

Stopping After the Entry, Not Running the Process

The decision journal is most valuable when it is part of a broader decision process, not a post-hoc record of decisions already made by other means. Writing the entry should be integrated into the decision-making process itself: writing out your reasoning clarifies it, forces you to state assumptions explicitly, and often reveals gaps before the decision is finalized. The journal entry is both a record and a decision tool.

The Power of Predicted Outcomes

The single most valuable habit in decision journaling is writing specific, observable, time-bound predicted outcomes before committing. "I expect this to work out" is not a prediction; it produces no learning. "I expect revenue to grow by at least 15% in the six months following this strategy change" is a prediction that can be verified or falsified. The specificity feels uncomfortable at the time of writing, which is exactly the point: the discomfort of committing to a specific prediction is the discomfort of genuine accountability to your reasoning, which is what the journal is designed to create.
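
The shape of a verifiable prediction can even be checked mechanically. The rough heuristic below is my own sketch, not from any standard tool, and no substitute for judgment: a prediction that names no measurable quantity and carries no deadline cannot be verified or falsified.

```python
# A rough heuristic sketch (an illustrative assumption, not a standard tool):
# a verifiable prediction should name a measurable quantity and a deadline.
import re
from datetime import date

def vagueness_warnings(statement: str, resolve_by: date | None) -> list[str]:
    """Return warnings; an empty list means the prediction has the right
    shape, which says nothing about whether it is a *good* prediction."""
    warnings = []
    if not re.search(r"\d", statement):
        warnings.append("no number: what observable quantity would verify this?")
    if resolve_by is None:
        warnings.append("no deadline: when does this become checkable?")
    return warnings

print(vagueness_warnings("I expect this to work out", None))        # two warnings
print(vagueness_warnings("Revenue grows at least 15% in six months",
                         date(2026, 6, 30)))                        # []
```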

Building the Habit: Making It Stick

The decision journal fails most often not because people disagree with its value, but because it never becomes a reliable habit. The activation energy of opening a document, writing a structured entry, and scheduling a review is higher than the activation energy of simply making the decision and moving on. Building the journal into a sustainable practice requires reducing friction and creating reliable triggers.

Use a Simple, Consistent Format

A standardized template that you fill in without thinking about structure (the seven elements described earlier, in a fixed order) reduces the cognitive overhead of each entry to near zero. Any format that requires you to figure out what to write each time will fail under time pressure. Create a template once, use it every time, modify it only when you discover a genuinely better approach through use.

Integrate the Entry Into the Decision Process

The most reliable trigger for writing a journal entry is the decision itself. Rather than treating the entry as a task to complete after deciding, make writing the entry part of how you make the decision: you have not finished making the decision until the entry is written. This reframing eliminates the common failure mode of intending to write the entry after the meeting or the call, and then never doing so.

Schedule Reviews at Entry Time

When you write each entry, immediately schedule a calendar reminder for the review. The review date should be based on when meaningful outcome data will be observable: 3, 6, or 12 months, depending on the decision type. Without a scheduled reminder, reviews depend on you spontaneously remembering to look at past entries, which reliably does not happen. Automate the reminder at entry time.
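
One way to automate this, sketched below under the assumption of a plain-text workflow, is to generate a minimal iCalendar event at entry time and import it into whatever calendar you use. Names and field choices here are illustrative.

```python
# A minimal sketch of automating the review reminder at entry time:
# emit a small all-day iCalendar event for import into any calendar app.
# Names and fields are illustrative assumptions.
from datetime import date, timedelta

def review_reminder(decision_title: str, entry_date: date, months: int) -> str:
    """Build a minimal all-day iCalendar event for the scheduled review."""
    review_date = entry_date + timedelta(days=30 * months)  # rough month math
    stamp = review_date.strftime("%Y%m%d")
    slug = decision_title.lower().replace(" ", "-")
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//decision-journal//EN",
        "BEGIN:VEVENT",
        f"UID:{stamp}-{slug}@decision-journal.local",
        f"DTSTAMP:{entry_date.strftime('%Y%m%d')}T000000Z",
        f"DTSTART;VALUE=DATE:{stamp}",
        f"SUMMARY:Review decision: {decision_title}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(review_reminder("Vendor selection", date.today(), months=6))
```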

Start Small: Three Entries Before Judging

The journal produces no immediately visible benefit: its value is in the review, which comes months later. This makes the early phase, before any entry has been reviewed, the period of highest abandonment risk. Commit to writing three complete entries and one review before evaluating whether the practice is worth continuing. The first review, seeing your pre-outcome predictions compared to actual results, is almost always sufficiently revealing to make the practice feel obviously valuable.

Conclusion

The decision journal is not a glamorous tool. It does not produce the immediate satisfaction of a framework applied in the moment or a technique that produces visible results in a meeting. Its mechanism is slower and less visible: it creates an objective record of reasoning that, when reviewed against outcomes over months and years, builds the calibration and self-knowledge that constitute genuine wisdom.

Most professionals learn from experience, but slowly, noisily, and with significant distortion from hindsight bias, outcome bias, and attribution asymmetry. The decision journal compresses that learning cycle, removes the distortions, and converts ambiguous outcome feedback into specific, actionable intelligence about where and how your judgment succeeds and fails. Over a career, the compounding effect of this systematic calibration is substantial: better judgment in hiring, better judgment in strategy, better judgment in resource allocation, better judgment in the hundreds of medium-stakes decisions that collectively determine most professional outcomes.

The investment required is modest: fifteen minutes per significant decision to write the entry, thirty minutes per quarter to review and identify patterns. The return, genuine improvement in the quality of judgment over time in the domains that matter most, is the highest-leverage use of structured reflection available to knowledge workers. Start with the next significant decision you face. Write the entry before the outcome is known. Schedule the review. Then let the compounding begin.