Introduction to Superforecasting
Article
Published 5th February 2026
by Al Brown
Superforecasting is the disciplined practice of making probabilistic predictions that are measurably more accurate than typical expert judgement. It is not clairvoyance, and it is not punditry. It is a set of methods, tested in large forecasting tournaments, that improves how individuals and groups think under uncertainty, update with new evidence, and communicate risk in a way that decisions can actually use.
At Cassi, we build on that scientific foundation. We take the techniques used by elite human forecasters and translate them into forecasting and strategy systems: repeatable workflows that generate calibrated probabilities, show their assumptions, learn from outcomes, and improve over time.
What superforecasting is
Most important decisions involve uncertainty: market shifts, competitor moves, operational risk, policy change, programme delivery, geopolitical shocks. Traditional planning often hides uncertainty behind confident narratives or single-point estimates. Superforecasting does the opposite: it makes uncertainty explicit and quantifies it.
In the research literature, superforecasting is strongly associated with:
- Resolvable questions – clear outcomes, deadlines, and objective resolution sources; “How would I know if I was right? What would have to happen for the forecast to be proven true or false?”
- Probability forecasts – not “will/won’t” blanket certainties. If your prediction is an assertion grounded in your sense of self rather than an assessment based on evidence, you may find it impossible to change your mind.
- Scoring and feedback – so accuracy is measurable, learnable, and improvable. “When the facts change, I change my mind. What do you do?”
- Aggregation – because well-designed groups beat most individuals
These methods were developed and validated in forecasting tournaments at scale, including IARPA’s ACE programme and the Good Judgment Project.
How we know it works
Superforecasting is not a vibe. It’s measurable.
Forecast accuracy is commonly evaluated using proper scoring rules such as the Brier score, which rewards forecasts that are both well-calibrated (probabilities match reality over time) and discriminating (higher probabilities assigned to events that occur). Forecasting tournaments create the conditions for science: many forecasters, many questions, clear resolution criteria, and consistent scoring.
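To make the score concrete, here is a minimal sketch of a Brier score for binary questions; the forecasters, probabilities, and outcomes are invented for illustration. A perfect forecaster scores 0, and someone who answers 50% to everything scores 0.25.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and binary outcomes (0 is perfect)."""
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: three resolved yes/no questions.
outcomes     = [1, 0, 1]            # 1 = the event occurred, 0 = it did not
forecaster_a = [0.85, 0.10, 0.70]   # confident and broadly right
forecaster_b = [0.50, 0.50, 0.50]   # hedges at 50% on everything

print(round(brier_score(forecaster_a, outcomes), 3))  # 0.041, well calibrated and discriminating
print(round(brier_score(forecaster_b, outcomes), 3))  # 0.25, the score of pure 50/50 hedging
```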
Research from these tournaments shows that accuracy improves with:
- Training in good probabilistic habits – even brief training can yield meaningful gains
- Practice with feedback
- Structured teamwork and aggregation
- Track-record-based weighting – giving more influence to those who consistently reduce error
The result: forecasting becomes an improvable capability, more like an instrument you can tune than a talent you either “have” or “don’t”.
The core techniques elite forecasters use
The best forecasters are not just “smart”. They work differently. Across the literature and tournament findings, a repeatable toolkit emerges:
- Start with base rates (the outside view) – Before diving into details, strong forecasters anchor on reference classes: similar situations and historical frequencies. This reduces base-rate neglect and prevents “50/50 by default” thinking.
- Decompose the question – Complex outcomes are broken into smaller, forecastable components – drivers, prerequisites, and thresholds – so uncertainty can be located rather than hand-waved.
- Think in probabilities, not certainties – Superforecasters use fine-grained probability estimates and treat them as provisional. They remain willing to be wrong, and willing to update.
- Update frequently and proportionally to evidence – New information changes beliefs – sometimes a little, sometimes a lot – but not in an all-or-nothing way. This is where disciplined Bayesian-style updating shows up in practice (see the sketch after this list).
- Use adversarial curiosity – Good forecasters actively seek disconfirming evidence, alternative hypotheses, and “what would change my mind?” triggers.
- Learn from outcomes – Proper scoring and post-mortems create a feedback loop that systematically improves judgement over time.
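As an illustration of the “outside view first, then update proportionally” habit, here is a minimal sketch of a single Bayesian update. The question, base rate, and likelihoods are invented for the example, not drawn from any tournament data.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability after observing one piece of evidence (Bayes' rule)."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Invented question: "Will the project ship by the end of Q3?"
prior = 0.30  # outside view: base rate of comparable projects shipping on time

# New evidence: the team hits its mid-cycle milestone. Assume, purely for
# illustration, that on-time projects hit this milestone 80% of the time
# and late projects only 40% of the time.
posterior = bayes_update(prior, p_evidence_if_true=0.80, p_evidence_if_false=0.40)
print(round(posterior, 2))  # 0.46: the evidence moves the forecast, but not to certainty
```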
How Cassi turns human methods into forecasting bots
Cassi takes these human techniques and implements them as machine-augmented workflows. The goal isn’t to imitate human personality; it’s to reproduce the behaviours that reduce forecast error – at speed, at scale, and with auditability.
Our systems are designed to:
- Force clarity upfront: define the outcome, time horizon, and resolution source before any forecasting begins.
- Construct an outside view: pull relevant base rates where they exist (including public forecasting markets and platforms when appropriate), then adjust using case-specific evidence.
- Decompose and model drivers: identify the factors most likely to move the odds, so the forecast is actionable, not just descriptive.
- Run multiple, independent lines of analysis: reduce single-model blind spots by using structured disagreement and comparison.
- Aggregate with track-record weighting: combine signals using performance-based weighting rather than “one person, one vote”.
- Maintain an update trail: forecasts are living objects – when evidence changes, probabilities change, and the system keeps a record of why (sketched below).
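To make the “living object” idea concrete, here is a minimal, hypothetical sketch of a forecast record with an update trail. The `Forecast` and `Update` classes and their field names are illustrative only, not Cassi’s actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Update:
    when: date
    probability: float   # revised probability after the new evidence
    rationale: str       # why the probability moved

@dataclass
class Forecast:
    question: str            # resolvable outcome, stated up front
    deadline: date           # time horizon
    resolution_source: str   # how the question will be judged
    probability: float       # current estimate
    updates: list[Update] = field(default_factory=list)

    def revise(self, when: date, probability: float, rationale: str) -> None:
        """Record why the probability changed, then change it."""
        self.updates.append(Update(when, probability, rationale))
        self.probability = probability

# Illustrative usage, with invented details
f = Forecast(
    question="Will supplier X deliver the component by 30 September 2026?",
    deadline=date(2026, 9, 30),
    resolution_source="Signed goods-received note",
    probability=0.55,
)
f.revise(date(2026, 4, 2), 0.70, "Supplier cleared its regulatory inspection early")
```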
This is also why Cassi cares about benchmarking and transparent evaluation – something you can see for yourself on our Benchmarking page.
Collective intelligence: why groups can outperform individuals
The science of forecasting repeatedly finds that well-constructed groups outperform most lone experts – especially when the system is designed to harvest diversity without collapsing into noise.
Collective intelligence works when you get three things right:
- Diversity of perspectives and information surfaces: Different people and different research paths notice different signals: domain-specific data, weak indicators, alternative explanations, on-the-ground context, contrarian sources. This increases coverage of the decision landscape.
- Independence before aggregation: If everyone converges too early, you don’t get wisdom – you get herding. Good systems preserve independent estimates and only then aggregate.
- Smart aggregation (including weighting): A simple average often helps, but forecasting research also supports weighted aggregation – giving more influence to forecasters (human or AI) who reliably improve accuracy – and structured methods that improve the signal-to-noise ratio (one weighting scheme is sketched below).
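One simple way to implement track-record weighting is to weight each estimate by the inverse of that contributor’s historical Brier score, so consistently accurate forecasters pull the aggregate further. The sketch below uses invented numbers and is one of many possible schemes, not a description of Cassi’s aggregation method.

```python
def weighted_aggregate(probabilities, brier_scores):
    """Combine individual probabilities, weighting by inverse historical Brier score."""
    weights = [1.0 / (b + 1e-9) for b in brier_scores]   # lower Brier score, larger weight
    return sum(w * p for w, p in zip(weights, probabilities)) / sum(weights)

# Invented panel: three independent estimates of the same question, alongside
# each contributor's historical Brier score (lower means more accurate).
probabilities = [0.60, 0.75, 0.40]
brier_scores  = [0.10, 0.05, 0.30]

print(round(weighted_aggregate(probabilities, brier_scores), 2))
# 0.67, pulled towards the most accurate contributor; a simple average gives about 0.58
```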
Teams become more coherent, more calibrated, and more accountable when forecasting is structured, scored, and continuously improved. Cassi is designed with Collective Intelligence as a core feature, not just an ‘add-on’ to bot forecasting.
From forecasting to strategy
Forecasts matter because they change choices.
Strategy becomes more effective when it is treated as a probabilistic system:
- Decisions are evaluated by expected value, not rhetoric (a small worked example follows this list).
- Plans are linked to drivers that move the odds, not just milestones.
- Uncertainty is priced, monitored, and updated – turning strategy into a living risk model rather than a static slide deck.
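As a small worked example of the expected-value framing, with all figures invented: a bid that costs £50k to prepare, with a calibrated 30% chance of winning £400k of margin, has a positive expected value and clears the bar.

```python
# Invented decision: bid or do not bid on a contract.
p_win        = 0.30      # calibrated forecast of winning the bid
value_if_won = 400_000   # margin if the bid succeeds, in pounds
cost_to_bid  = 50_000    # cost of preparing the bid, in pounds

expected_value = p_win * value_if_won - cost_to_bid
print(expected_value)  # 70000.0, positive, so the bid clears the expected-value bar
```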
Superforecasting provides the scientific bedrock. Cassi’s work is to make it operational: fast enough for real organisations, rigorous enough for high-stakes decisions, and transparent enough to trust.