Facing Up to the Worst
Most efforts to reduce suffering focus on helping individuals alive today. This is understandable, given the tragedies that currently exist. Many millions of people endure immense suffering due to poverty, wars, disasters, chronic pain, depression, and so on. On an even greater scale, tens of billions of animals live and die miserably in factory farms built to facilitate their mass exploitation and abuse. Beyond human civilisation, there are billions upon billions of wild animals who suffer serious harms.
But what about the possibility of a future moral catastrophe? What if such tragedies could potentially take place on an even larger scale? And what can we do now to prevent that from happening?
The concept of s-risks, short for “suffering risks” or “risks of astronomical suffering”, was developed to address these questions. S-risks are scenarios that involve severe suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth.
Such worst-case scenarios could come about in many ways. For example:
An oppressive global dictatorship could entrench itself, leaving no possibility of escape or reform.
Malevolent actors could employ advanced technologies with sadistic intentions, causing large-scale suffering.
Escalating competition between powerful artificial intelligence systems could lead to attempts at coercion or retaliation, potentially resulting in catastrophic outcomes.
Artificial sentient beings could emerge and be exploited or abused on an enormous scale.
However, not every dystopian outcome qualifies as an s-risk: the definition above also requires that the resulting suffering be astronomical in scale (e.g., suffering spread across many planets in the course of space colonisation).
Many will naturally be sceptical of research on s-risks. S-risks are, after all, still speculative. But we are not justified in simply dismissing them as weird, esoteric, or “crazy science fiction”. On the contrary, plausible empirical and normative premises suggest that focusing on the reduction of s-risks is reasonable. I make this argument in detail in my book, Avoiding the Worst: How to Prevent a Moral Catastrophe, which is freely available as an ebook and as an audiobook. At the very least, I believe it is extremely important to create a space for serious reflection on this topic.
To that end, we at CRS have recently announced an introductory fellowship on s-risks. This six-week programme is designed to introduce participants to the core ideas of s-risks and build a stronger community of people working on effective suffering reduction.
Through the programme, we aim to provide a supportive environment as we face up to the disturbing reality of how much severe suffering there is, and how much more there could be in the future. The ultimate purpose is not simply to think or talk about s-risks, but to identify (if possible) practical actions and interventions to reduce them.
We also appreciate sceptical perspectives. For example, you could conclude that concerns about s-risks are a distraction from the urgency of alleviating ongoing suffering, or that we are clueless when it comes to influencing the long-term future. There are many open questions: What exactly can be done at this point to reduce s-risks? Do some of the more speculative scenarios come close to Pascal’s mugging, relying on tiny probabilities of fantastical outcomes? Should we expect most future suffering to result from the most extreme outcomes, or from a broader distribution?
It is perfectly legitimate if fellowship participants conclude that s-risks are too unlikely or too intractable to be considered a top priority. I would argue, however, that one can only arrive at that conclusion after careful consideration of different pathways to worst-case outcomes and possible countermeasures. In short, we conceive of the fellowship as an invitation to open-minded inquiry, not as a dogmatic bootcamp.
If you are interested, we invite you to apply as a facilitator before December 21, 2025 and/or as a fellow before January 3, 2026. The fellowship will start in early February 2026 and run for six weeks. It will be held online.


I can't wait for the introductory fellowship to begin! I just signed up the other day. In addition to what I wrote about wanting an AI safety focus in s-risk research, some other thought-provoking and disturbing questions occurred to me (which might help advance the research): Might prioritizing s-risks lead one to "valorize the void"? (Cf. https://www.goodthoughts.blog/p/dont-valorize-the-void ) Might there be some level or threshold of s-risk (or perhaps even simply the aggregate of all extant suffering) that would make extinction, or antinatalism, a good idea? That thought is really disturbing to me, as an advocate for AI safety and as a longtermist, but I have to prize the virtue of epistemic open-mindedness and curiosity even when the conclusions are disturbing. A less important side question I'm vaguely interested in: what to make of Spinoza's and Nietzsche's views on suffering and pity vis-à-vis s-risks?