I can't wait for the introductory fellowship to begin! I just signed up the other day. In addition to what I wrote about wanting an AI safety focus in S-risk research, some other thought-provoking and disturbing questions I've been mulling over (which might help advance the research) include: Might prioritizing S-risks lead one to "valorize the void"? (Cf. https://www.goodthoughts.blog/p/dont-valorize-the-void ) Might some level or threshold of S-risk (or perhaps even simply the aggregate of all extant suffering) warrant extinction, or antinatalism? This is really disturbing to me, as an advocate for AI safety and as a longtermist, but I have to prize the virtue of epistemic open-mindedness and curiosity even when it's uncomfortable. And a side question, less important, that I'm vaguely interested in: what should we make of Spinoza's and Nietzsche's views on suffering and pity vis-à-vis S-risks?
Thanks for your comment and for your interest in the fellowship!
We plan to address the issue of "valorizing the void" in a future post, but in short, I don't think this is a compelling critique of s-risks. One can focus on the reduction of s-risks without taking any stance on "the void".
Regarding extinction: I'd argue that we should be cooperative towards other value systems (https://centerforreducingsuffering.org/research/why-altruists-should-be-cooperative/), and adopt strong norms of non-violence. So even if one were to think that extinction is a "good idea", or a lesser evil compared to potential future suffering, I think it's still much better in practice to focus on other ways to reduce s-risks, e.g. by trying to find common ground and win-win solutions.