So please, remember: there is a very wide variety of ways to care about making sure that advanced AIs don't kill everyone. Fundamentalist Christians can care about this; deep ecologists can care about this; solipsists can care about this; people who have no interest in philosophy at all can care about this. Indeed, in many respects, these essays aren't centrally about AI risk in the sense of "let's make sure that the AIs don't kill everyone" (i.e., "AInotkilleveryoneism"); rather, they're about a set of broader questions about otherness and control that arise in the context of trying to ensure that the future goes well more generally.
from Otherness and control in the age of AGI by Joe Carlsmith
The first essay, "Gentleness and the artificial Other," discusses the possibility of "gentleness" towards various non-human Others – for example, animals, aliens, and AI systems.
The second essay, "Deep atheism and AI risk," discusses what I call "deep atheism" – a fundamental mistrust towards both Nature and "bare intelligence."
The third essay, "When 'yang' goes wrong," expands on this concern. In particular, it discusses the sense in which deep atheism can prompt an aspiration to exert extreme levels of control over the universe.
The fourth essay, "Does AI risk 'other' the AIs?", examines Robin Hanson's critique of the AI risk discourse – and in particular, his accusation that this discourse "others" the AIs and seeks too much control over the values that steer the future.
The fifth essay, "An even deeper atheism," argues that this discomfort should deepen yet further when we bring some other Yudkowskian philosophical vibes into view – in particular, vibes related to the "fragility of value," "extremal Goodhart," and "the tails come apart."
The sixth essay, "Being nicer than Clippy," tries to draw on this guidance. In particular, it tries to point at the distinction between a paradigmatically "paperclip-y" way of being and some broad and hazily-defined set of alternatives that I group under the label "niceness/liberalism/boundaries."
The seventh essay, "On the abolition of man," examines another version of that concern: namely, C.S. Lewis's argument (in his book The Abolition of Man) that attempts by moral anti-realists to influence the values of future people must necessarily be "tyrannical."
The eighth essay, "On green," examines a philosophical vibe that I (following others) call "green," and which I think contrasts in interesting ways with "deep atheism."
The ninth essay, "On attunement," continues the project of the previous essay, but with a focus on what I call "green-according-to-blue," on which green is centrally about making sure that we act with enough knowledge.