The Origins of Cassi

Cassi’s origins date back to 2008, when our CEO, Dr Keith Dear, was an Intelligence Officer on his second tour of Afghanistan, involved in the leadership targeting campaign against the Taliban. He asked a simple question: how would we know it was working?

The result was Beheading the Hydra, a study that is a proto-Cassi. It works backwards from the intended outcome – Taliban and al Qaeda leaders were killed or captured to reduce their organisations’ capability and affect their psychology: to deter, cause dissension, and divide – and asks how we would know whether these effects were achieved. On that basis, it found that, as a strategy, the campaign was counter-productive. Published in 2011, it became the most-read article in the Journal of Defence Studies and remains in the top ten today.

Assessing Assessments

Not long after, Keith met our Chief Innovation Officer, John Hetherington. Both had read Professor Philip Tetlock’s Superforecasting and became vociferous advocates for the approach within UK Defence: if we know this is forecasting best practice (and we do), and we know that, post-Butler Report and Chilcot Inquiry, the UK Government is committed to using probabilistic methods, why were Tetlock’s recommendations not being implemented?

By 2015, frustration with the lack of progress led Keith and John to publish Assessing Assessments, pointing out that the UK terrorist threat level had stood at SUBSTANTIAL or SEVERE continuously since 2006. Based on the UK Defence ‘Intelligence Yardstick’, this implied the probability of an attack had been 75% or greater every day since 2006 – and it has remained at that level ever since. Clearly, there have not been attacks on 75% of days since 2006. The problem, in any domain, with forecasts that are detached from reality is that they lead to one of the following:

  1. the forecasts not being taken seriously as meaningful, and so being ignored;
  2. the forecasts being gamed, to win resources versus rival areas;
  3. the forecasts being taken seriously, with resources therefore allocated inefficiently to over-estimated (as in this terrorist-threat example) or under-estimated risk.

Or some combination of all three. Not only that: if the probabilities aren’t derived from a proper system, they can be too easily politicised – exaggerated for small-‘p’ (organisational) or large-‘P’ (Party) political reasons.

Early Advocates of AI in Defence

Around the same time as Assessing Assessments was published, Cassi’s later founders were all recognising the growing importance of big data and AI to decision-making. Keith and our CTO Al Brown were two of the earliest advocates in UK Defence for the centrality of AI to the future of decision-making. Al authored the Ministry of Defence’s Human & Machine Teaming Concept Note in 2018, while Keith guest-edited RUSI’s Special Edition on AI that year, also contributing two articles – one on how AI would drive rigour into decision-making, the other exploring Russian President Putin’s claim that ‘Whoever Leads in AI will Rule the World’.

More articles and talks followed, with Keith and Al both winning Fellowships to study AI and decision-making at Oxford – Keith as an Experimental Psychologist, Al as an Engineer. They continued to see how difficult and troubled human decision-making was in the real world: for Keith, across 12 months as Expert Advisor to the Prime Minister, then in business, studying for an EMBA at Cambridge and working within Fujitsu’s global corporate structure and across industry sectors; for Al and John, in increasingly senior roles in defence. Their determination to employ technology and more rigorous methods to address these shortcomings deepened.

Fujitsu C-CAT

Reunited at Fujitsu, in the Centre for Cognitive and Advanced Technologies (C-CAT), Keith, Al and John worked with a terrific team to push the frontier in ‘cognitive technologies’ – as machines increasingly drew insight or foresight from data faster and more effectively than humans – while at the same time working to bring the UK, Japan and other allied nations closer together in strategic technological collaboration. Early experiments in LLM forecasting at Fujitsu led to victory for the C-CAT team’s Artificial Intelligence, ‘Cassie’, in the Metaculus Forecasting Cup (Q3 2024). Cassie finished as far ahead of second place as the gap between second and sixth (Al likes to say, like Usain Bolt crossing the line with the also-rans trailing in later 😉). Cassie was 75% accurate to the humans’ 50% – AI performing 50% better than humans across more than 300 questions over three months.

As the C-CAT’s R&D programmes were mainstreamed, and as part of a restructuring within Fujitsu, the team left by mutual agreement to go full-time with Cassi in March 2025.

Creating Cassi

Having seen decision-making up close – leading across military operations in the most demanding environments, at the highest levels of UK Government, in national and international diplomacy, and across global business – Cassi’s Founders were determined to take the insights and arguments for better decision-making they had developed over a lifetime of experience, and improve it for everyone. Cassi, and superstrategy, were born.

Having argued for years that the advent of AGI would be the most profound revolution in human history – and determined to be part of that revolution – the Founders formed Cassi, devoted to the continuous improvement of human decision-making and the development of new model architectures for AGI: to surf the tidal wave of change, and to mitigate its consequences for companies, countries, citizens and all our customers.