Jan 28, 2023 · edited Jan 28, 2023

Really enjoyed this - reading it again I find myself less in disagreement with it, and it seems subtler and more interesting, which I will arrogantly attribute to my comments on the draft. (EDIT: to be clear, this was an attempted joke.) I'll just leave two thoughts:

1) You're right that, if we are *ever* to make decisions, we somehow have to make trade-offs between seemingly incommensurable ends, and it cannot be a criticism of EA that it demands that people do this. But a better criticism is not 'EA demands that we make trade-offs', but rather 'EA demands that we make trade-offs in a very particular way' - viz., through more-or-less explicit EV calculations.

You argue that there's not going to be much of a difference between this way of making trade-offs and any other reflective approach that is basically defensible. But whether different approaches to trade-offs lead to differences in practice is a contingent empirical question, and I worry you don't go into enough detail to establish your suggestion that we can all get along here.

For instance, in global health and wellbeing, Michael Plant has very explicitly argued that technical philosophical differences matter a huge amount to altruistic resource allocation (https://forum.effectivealtruism.org/posts/JgqEqsa6iAtqGLYmw/the-elephant-in-the-bednet-the-importance-of-philosophy-when-1). I'm not sure whether I agree with Plant in the global health and wellbeing case, but in the animal advocacy space it's generally agreed that these philosophical differences matter a tonne: there have been debates for decades now between welfarists, animal rights activists (and their more extreme cousins, the abolitionists), and various minority factions, and it is received wisdom that these differences have serious practical upshots. Comparing the actions of welfarist and abolitionist orgs is a fascinating exercise.

EA has often stumbled into this debate aggressively, in ways that are ignorant of these philosophical differences and more than a little offensive; and the best analysis I know of that does conclude 'these differences shouldn't matter to practice' also suggests that EA has been doing animal advocacy wrong this whole time (https://forum.effectivealtruism.org/posts/9qq53Hy4PKYLyDutD/abolitionist-in-the-streets-pragmatist-in-the-sheets-new). So I think you're a little quick to assume away practical upshots from philosophical differences.

2) Your suggestions about 'non-utilitarian longtermism' look a lot like the proposals of EAs who want to *reject* the label of longtermism, such as Carl Shulman (https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/) and Scott Alexander (https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/long-termism-vs-existential-risk). Likewise, those who care about existential risk but want to make space for more methodological pluralism tend to reject the concept of 'longtermism' altogether: see this paper co-authored by my friend Matthijs (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4118618) or this highly critical one that has stood up very well (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225).

It's pretty clear that the people who coined the term 'longtermism' really did mean for it to denote a 'triumphant moral philosophy' influenced by Bostrom's ideas about space colonisation - see Nick Beckstead (https://rucore.libraries.rutgers.edu/rutgers-lib/40469/PDF/1/play/) and MacAskill (https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism). And unlike 'altruism' or 'effectiveness', 'longtermism' is not an everyday term that the EAs have merely interpreted in a certain way - it's a technical term that they have essentially defined to mean 'triumphant moral philosophy about the future'. Longtermists and their critics alike agree on this definitional point. So what value do you see in trying to redefine 'longtermism' and appropriate it for pluralism, rather than simply giving the term up and moving on to other things?
