Writing

There Are No Coherence Theorems

AI Alignment Forum, 2023

HTML

Abstract:

Authors in the AI safety community often make the following claim: advanced artificial agents will behave like expected utility maximisers, because doing so is the only way to avoid pursuing dominated strategies. I argue that the claim is false.
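
To illustrate the key term with a standard money-pump example (my illustration, not drawn from the post): an agent with cyclic strict preferences

\[
A \succ B, \qquad B \succ C, \qquad C \succ A
\]

pursues a dominated strategy if it trades in line with those preferences. Starting with C, it will pay a small fee to swap C for B, again to swap B for A, and again to swap A for C, ending with its original option and three fees poorer. Refusing to trade dominates that sequence; this is the kind of dominated strategy that coherence arguments invoke.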

Summary: The Case for Strong Longtermism

Global Priorities Institute, 2022

HTML

Abstract:

In this paper, Hilary Greaves and William MacAskill make the case for strong longtermism: the view that the most important feature of our actions today is their impact on the far future. They claim that strong longtermism is of the utmost significance: that if the view were widely adopted, much of what we prioritise would change.

Summary: The Epistemic Challenge to Longtermism

Global Priorities Institute, 2022

HTML

Abstract:

According to longtermism, what we should do mainly depends on how our actions might affect the long-term future. This claim faces a challenge: the course of the long-term future is difficult to predict, and the effects of our actions on the long-term future might be so unpredictable as to make longtermism false. In 'The Epistemic Challenge to Longtermism,' Christian Tarsney evaluates one version of this epistemic challenge and comes to a mixed conclusion. On some plausible worldviews, longtermism stands up to the epistemic challenge. On others, longtermism’s status depends on whether we should take certain high-stakes, long-shot gambles.

The Moral Case for Long-Term Thinking
(with Hilary Greaves and William MacAskill)

The Long View, 2021

HTML / PDF / Open-access book

Abstract:

This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does not depend on when it happens. Together, these empirical and moral claims push us toward strong longtermism: if enormous numbers of future people matter no less than people alive today, then our effects on the far future will often matter most.
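
As a rough illustration of how such expected-value reasoning works (the figures below are hypothetical placeholders, not the chapter's own estimates): suppose Earth remains habitable for another billion years (10^7 centuries), that around 10^10 people live per century, and that humanity has just a 1% chance of surviving that long. Then

\[
\mathbb{E}[\text{future population}] \approx 0.01 \times 10^{10} \times 10^{7} = 10^{15}.
\]

Even with deliberately pessimistic survival probabilities, the product is vast, and it is this vastness that powers the expected-value case for longtermism.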

We then address a few potential concerns, the first of which is that it is impossible to have any sufficiently predictable influence on the course of the long-run future. We argue that this is not true. Some actions can reasonably be expected to improve humanity’s long-term prospects. These include reducing the risk of human extinction, preventing climate change, guiding the development of artificial intelligence, and investing funds for later use. We end by arguing that these actions are more than just extremely effective ways to do good. Since the benefits of longtermist efforts are large and the personal costs are comparatively small, we are morally required to take up these efforts.

Wittgenstein's Tractatus: Now With Examples

2021

HTML

Abstract:

I explain the Tractatus using plenty of examples.

Against Ambition

2019

HTML / PDF

Abstract:

I argue that personal ambition is bad for you.