Really enjoyed this - reading it again I find myself less in disagreement with it, and it seems subtler and more interesting, which I will arrogantly attribute to my comments on the draft. (EDIT: to be clear, this was an attempted joke.) I'll just leave two thoughts:
1) You're right that, if we are *ever* to make decisions, we somehow have to make trade-offs between seemingly incommensurable ends, and it cannot be a criticism of EA that it demands that people do this. But a better criticism is not 'EA demands that we make trade-offs', but rather 'EA demands that we make trade-offs in a very particular way' - viz., through more-or-less explicit expected-value (EV) calculations.
You argue that there's not going to be much of a difference between this way of making trade-offs and any other reflective approach that is basically defensible. But whether different approaches to trade-offs lead to differences in practice is a contingent empirical question, and I worry you don't go into enough detail to establish your suggestion that we can all get along here.
For instance, in global health and wellbeing, Michael Plant has explicitly argued that technical philosophical differences matter a great deal to altruistic resource allocation (https://forum.effectivealtruism.org/posts/JgqEqsa6iAtqGLYmw/the-elephant-in-the-bednet-the-importance-of-philosophy-when-1). I'm not sure whether I agree with Plant in the global health and wellbeing case, but in the animal advocacy space it's generally agreed that these philosophical differences matter a tonne: welfarists, animal rights activists (and their more extreme cousins, the abolitionists), and various minority factions have been debating for decades now, and it is received wisdom that these differences have serious practical upshots - comparing the actions of welfarist and abolitionist orgs is a fascinating exercise.
EA has often aggressively stumbled into this debate, in ways that are ignorant of these philosophical differences and more than a little offensive; and the best analysis I know of that does conclude 'these differences shouldn't matter to practice' also suggests that EA has been doing animal advocacy wrong this whole time (https://forum.effectivealtruism.org/posts/9qq53Hy4PKYLyDutD/abolitionist-in-the-streets-pragmatist-in-the-sheets-new). So I think you're a little quick to assume away practical upshots from philosophical differences.
2) Your suggestions about 'non-utilitarian longtermism' look a lot like the proposals of EAs who want to *reject* the label of longtermism, such as Carl Shulman (https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/) and Scott Alexander (https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/long-termism-vs-existential-risk). Likewise, those who care about existential risk but want to make space for more methodological pluralism tend to reject the concept of 'longtermism': see this paper co-authored by my friend Matthijs (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4118618) or this very critical one that has stood up well (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225).
It's pretty clear that the people who invented the term 'longtermism' really did mean for it to denote a 'triumphant moral philosophy' influenced by Bostrom's ideas about space colonisation - see Nick Beckstead (https://rucore.libraries.rutgers.edu/rutgers-lib/40469/PDF/1/play/) and MacAskill (https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism). And unlike 'altruism' or 'effectiveness', 'longtermism' is not a common term that the EAs have interpreted in a certain way - it's a technical term that they have essentially defined to mean 'triumphant moral philosophy about the future'. The longtermists and their critics agree on this definitional fact. So what value do you see in trying to redefine the term 'longtermism' and appropriate it for pluralism, rather than simply giving it up and moving on to other things?
Thank you, Peter. (It probably is because of your comments on the draft.)
On 1), I think I agree, but I would say this is sort of tangential to my argument (at least as I see it). I don't think "utilitarianism" has any one particular strong answer to offer here, so in my view this is further evidence for the claim that 'if we want to do effective altruist things (e.g. protect animals) we have to move away from hoping that utilitarianism-the-moral-theory can give us the answers'. I am not trying to argue that philosophy is irrelevant - I would consider Sen's capability approach a form of philosophical thinking! - but rather that Big Moral Theories can't be expected to do all the work for us. So I accept your point wholesale, but if it seems to undermine my argument, that's probably because I've expressed my point unclearly.
I think 2) is a great point. To be honest, it's probably partly because I think "longtermism" is a really nice phrase that could put non-space-colonisation-derived existential risk research in a good moral light, and partly because the "EAs are longtermists now" idea is sufficiently widely held that it might be too late to coin a new term that captures what Alexander/Shulman et al. are trying to get at. Plus, I suppose I am a big believer in rhetorical redescription as a strategy, so it seems like a reasonable thing to try.
Ok, yeah, nice. Especially like your view on 2). On 1), I agree with your point that Big Moral Theories can't do all the work for us *even if* we're sympathetic to them, but I don't think that's a consensus position in these spaces; I think that many people have the view that we can fully determine what-is-to-be-done by just plugging some relevant empirical facts into our moral theory. So I don't think you were unclear, I think you just have a view that is not commonly held. (But I think it's the right view, and it's good that you're trying to make it more widely held! So no criticism here.)