Before we get on to the topic of this post — the case for greater moral pluralism within effective altruism — we ought to work out whether that case really needs to be made at all. Is effective altruism today all that distinctively committed to a particular moral theory? In other words, is it (currently) a utilitarian community?
Oddly, the answer to this question depends on who you ask, and how you ask it.
If you ask most people, they will say yes. When most people think of effective altruism, they think not of all the demonstrable good effective altruists have done, but of Sam Bankman-Fried. And of the many distinctive things about Bankman-Fried’s worldview, one very obvious, widely-known thing, aside from the most obvious, widely-known thing, is that he believes in a full-on, no-holds-barred, zero-caveat, bullet-biting, proudly philosophical utilitarianism:
And though it’s hard to prove, I reckon this is what most people think effective altruism necessarily involves. That is, the public perception seems to be that you can’t be an effective altruist unless you’re capable of staring the repugnant conclusion in the face and sticking to your guns, as Will MacAskill does in his tremendously widely-publicised and thoughtfully-reviewed book. (Indeed, a lot of the criticism levelled at effective altruism, as I will discuss in this post, operates under this assumption, self-consciously or not.)
Yet if you asked most prominent effective altruists this question, you would receive the opposite answer. For instance, MacAskill’s “The definition of effective altruism” contains this:
And Benjamin Todd, who co-founded the excellent 80,000 Hours careers advice service with MacAskill and wrote up the above paper here, says:
So that’s what you’d hear if you were to ask community leaders explicitly. Yet if you were more interested in reading between the lines, you would be forgiven for thinking the answer was quite different. Effective altruists’ revealed preferences seem much more in line with the public’s perception.
Effective altruists adopt utilitarianism at much higher rates than the public at large, and prominent effective altruists seem especially committed to this moral vision. In practice, therefore, the community operates with a default, background assumption of strong utilitarian thinking; even notionally acclaimed attempts to challenge this remain, in the end, very much outlier views. Besides that, the public’s perception of effective altruism is not based on nothing: it’s based on the works and ideas of the movement’s most famous figures, and in substance these works are all what we might call ‘Very Clearly Utilitarian’. (I thank Peter McLaughlin for driving this point home to me, among other valuable feedback.)
That is not to say that MacAskill’s ‘not-just-utilitarianism’ definition of effective altruism is misconceived. (Indeed in some ways this post is just an attempt to expand upon that idea.) But it is to say that Tyler Cowen was quite right to have described effective altruism as an offshoot of utilitarianism in some important respects, in spite of messaging that suggests otherwise.
In particular, Cowen’s view is that there are:
two ways in which effective altruism to me seems really quite similar to classical utilitarianism. The first is simply a notion of the great power of philosophy, the notion that philosophy can be a dominant guide in guiding one's decisions[, whilst the second] is a strong emphasis on impartiality.
Moreover, for him this inheritance is a mixed blessing. As he puts it:
At current margins, I'm fully on board with what you might call the EA algorithm. At the same time, I don't accept it as a fully triumphant philosophic principle that can be applied quite generally across the board, or as we economists would say, inframarginally.
Interestingly, what Cowen says here is surprisingly consistent with — albeit much more sympathetic in tone than — a lot of the far more overt criticism of effective altruism of late. As I discuss in this post, a recurring theme of this body of work is that effective altruism’s deeds (i.e., doing charity effectively) are good; the ‘triumphant philosophic principle’ that comes with it (in the form of an all-explanatory utilitarianism) is, for one reason or another, a serious limitation.
Of course, a contented triumphant utilitarian might naturally respond to this that we simply can’t have one without the other. But: is that really true?
For effective altruists, that is certainly worth asking — whether critics’ distaste for this form of utilitarianism resonates or not. The utilitarian moral theory that presently encloses effective altruist thinking is not only strongly associated with the people who have brought it into disrepute; it is also highly particular. As I try to show in what follows, there are plenty of people who could be amenable to effective altruism but balk at it because they do not share what they (understandably) believe to be its necessary moral theory. For big-tent-seeking effective altruists, then, triumphant utilitarianism incurs a real cost.
So, with this in mind, in this post I attempt to do two things.
In the first two sections, I try to show that effective altruism and moral philosophy do not naturally run together. Though the two seem linked, effective altruists are up to something substantively and categorically different to moral theorists, utilitarian or otherwise. This means that the justification of their actions need not depend on utilitarianism at all.
This argument would, however, mean abandoning any attempt to turn effective altruism into an all-explanatory moral theory. Is it worth paying the price?
In the final two parts of this essay, I try to argue so. Not only can utilitarianism be fully separated out from effective altruism, it ought to be. The pay-off would not just be improved public relations. It would open up effective altruism to a genuine methodological pluralism. And for those interested in movement-building, or maximising the amount of charitable work effective altruism gets up to, that methodological pluralism would not only be an intrinsic good: it would make effective altruism an altogether more natural home for many of those who currently believe themselves to be better-suited to a life outside it, too.
In some respects, this is hardly new. The view that there are good reasons for effective altruism to distance itself from utilitarianism is clearly tacitly taken already, or it wouldn’t have made sense for MacAskill and others to make the claims discussed above. Nonetheless, splitting the two apart in practice would require unpicking many more of the ties that demonstrably still exist.
In other words, this is inevitably an annoyingly lengthy post, but do bear with me.
I. The division of labour between effective altruists and philosophers
To begin prising apart effective altruism and moral philosophy, I want to return to a rather thorough critique of effective altruism that Amia Srinivasan wrote, seven-and-a-bit years ago, for the London Review of Books, which ends as follows:
As it happens, I think this criticism is question-begging. Srinivasan has decided in advance that we need ‘an alternative vision of how things could be’ — a vision of systemic reform — and criticises the logic of effective altruism for its failure to reach the same verdict. She doesn’t really want to debate whether that vision is necessary. For her, it’s a given. So effective altruism is bound to fail.
But that says something in itself. Effective altruists have a very different idea of what ‘positive change’ looks like compared to many philosophers and political theorists, like Srinivasan. Incremental action with relatively certain positive consequences has a higher ‘expected value’, in effective altruism’s terms, than the kind of systemic change that Srinivasan has in mind, which is almost unavoidably altogether more morally uncertain. (Srinivasan may think we need to dismantle capitalism, for instance, but some of us believe global capitalism is good; this kind of question ultimately turns on our deeper values, which are too subjective to be ranked as ‘better’ or ‘worse’ with any degree of confidence — or humility.)
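To make the shape of that disagreement concrete, here is a toy expected value comparison in Python. Every number in it is a made-up illustrative assumption, not anyone’s actual estimate; the point is the structure, not the figures.

```python
# A toy comparison, with entirely made-up numbers, of why 'certain and
# incremental' can beat 'uncertain and systemic' in an expected value frame.

# Incremental intervention: very likely to work, modest benefit.
ev_incremental = 0.95 * 100            # = 95.0 units of good

# Systemic intervention: huge payoff if it works, but whether the outcome
# even counts as 'good' depends on contested values. If we are only 50%
# confident the change would be an improvement at all, the possible
# downside cancels most of the upside.
p_success = 0.10
p_change_is_good = 0.50
ev_systemic = p_success * (p_change_is_good * 10_000
                           + (1 - p_change_is_good) * -10_000)  # = 0.0

print(ev_incremental, ev_systemic)     # 95.0 0.0
```

However large the headline payoff of systemic change, deep uncertainty about whether the outcome is good at all drags its expected value towards zero.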
To me (though I am not the first to say this), this disagreement about the nature of positive change reflects the fact that effective altruists and philosophers are up to two very different things. Effective altruists are primarily concerned with how best to use time and money for charitable ends: they seek to guide resource allocations in the here-and-now. Philosophers, by contrast, spend their time thinking about value-judgements in much more abstract ways, and, importantly, often over much longer time-frames (leaving aside longtermism for now). There is a clear division of labour between the two.
The difference in timescale matters. The longer you look into the future, the more you will need to confront the irresolvable nature of value conflict. At that scale, there are a vast range of different possible outcomes, and a similarly vast range of theoretical frameworks to consider in deciding which to pursue. For the effective altruist, all this can do is weigh down an expected value calculation: how can we decide whether something delivers value if we face so many choices about what ‘value’ even looks like? Hence effective altruism struggles to have all that much to say about truly big picture philosophical questions — just as Srinivasan argued.
But that’s fine. Insofar as effective altruism is, fundamentally, a way of helping people work out how best to use their resources to do good, it simply doesn’t need to worry about any of these long-term philosophical quandaries.
In the short term, it is perhaps surprisingly easy to avoid having to quibble in big philosophical terms about what constitutes a ‘good deed’. In practice, most moral systems converge on some basic commitments at this timescale: that ‘generosity’, ‘helping the poor’ and ‘effectiveness’, for instance, are all straightforwardly, incontrovertibly good. And these are the only moral commitments that effective altruists need in order to start working out how to allocate resources towards charitable ends in the manner that they do today.
Perhaps the easiest way to demonstrate this is to show that even effective altruism’s most implacable and embittered critics don’t actually disagree with effective altruists’ action-guiding recommendations — which implies they really are derived from uncontroversial foundations. For instance, part-way through her LRB critique, Srinivasan reveals she is in fact entirely on board with effective altruism’s view of what might be a ‘good’ way to tackle poverty, here and now:
Or for another example: Jimmy Lenman, in an otherwise scathing, anti-utilitarian attack piece, remains full of praise for what effective altruists actually do:
And somewhat amusingly, even Émile Torres, a strong contender for effective altruism’s worst-faith critic (albeit in a hotly-contested field), still did this:
It is, it seems, really quite hard to take issue with the way effective altruists go about practising their side of the moral division of labour — resource allocation. Everyone with their head screwed on thinks that donating money to charity in effective ways is good — whether they are deontologists, virtue ethicists, or goodness-knows-whatever-else. And I think most would also recognise that the charitable sector could really, really do with some greater critical thinking.
The objections begin to creep in, instead, when these philosopher-critics start to fear that effective altruists are encroaching on their territory (that is, attempting philosophy), something for which, they imply, effective altruism cannot supply the analytical tools, since those tools were built for a very different enterprise.
Now we might plausibly bicker about whether these critics draw on fair examples to make their argument about the paucity of effective altruism as a philosophy, or whether they have really understood the goals and work of effective altruists correctly before attacking them. (Among various other conceivable objections.) But I think a much neater proposal might be this: to accept the point entirely. Effective altruism isn’t, or at least shouldn’t be, a big picture, ‘triumphant philosophic principle’. And nor, then, is it a cognate of utilitarianism.
II. A truly non-utilitarian effective altruism
This would not cause so many problems as one might think. Granted, the perception that effective altruism derives from triumphant utilitarianism is rather strong. Granted, lots of prominent effective altruists are in point of fact utilitarians. And granted, there is some — apparent — methodological overlap.
But we have already looked at how the action-guiding role of effective altruism is basically uncontroversial. It is uncontroversial because the belief that effective altruism can ‘do good’ is not contingent on holding any particular philosophical framework. Non-utilitarians can (and do) support what effective altruists do.
And in practice there is a relatively clear distinction between the two concepts.
Utilitarianism is a philosophical doctrine of the ‘big picture’, engaging-with-value-conflict type. The utilitarian claim is that the various values that divide people are actually commensurable with one another, via the higher value of ‘utility’. In other words, it doesn’t matter if some cherish truth where others value beauty: we can resolve these tensions by working out where the utility lies beneath. Now, obviously, utilitarians rely on utilitarianism to think about the decisions they make too, so it is action-guiding as well as ‘big picture’ thinking — but what makes ‘utilitarianism’ as a term mean anything (at least in a normative, triumphant-principle sense) is that utilitarians use its logic to reach distinctive conclusions, which other moral frameworks would not advise.
Effective altruists, by contrast, need not think that values are commensurable, nor that utility is in any way an ultimate arbiter of philosophical life. None of this is necessary for their overriding objective of putting resources to good use. If resource allocation were dependent on such ethical positions, the deeds and aims of (classical) effective altruists would be morally controversial in a way they simply are not. (Again, leaving aside longtermism for now.)
In fact, we can go further: effective altruism presupposes that you cannot make values commensurate with one another in the way utilitarianism suggests. That is why effective altruism discounts systemic change so heavily in its expected value calculations, to the point of conflict with Srinivasan — its hostility to large-scale philosophical conclusions is, implicitly, predicated on the very inescapability of moral conflict. By contrast, a committed utilitarian ought to be much more confident that it is possible to judge what moral ‘improvement’ looks like.
Another way to put it is this: utilitarianism is a worldview; effective altruism is a theory of resource allocation, which in practice operates under precisely the assumption that one ought not place much confidence in any given worldview.
This might seem to put quite a bit of distance between my conception of effective altruism and that of someone like MacAskill, who is clearly keen to leverage it into a form of utilitarian moral theory. Yet I think it fits quite neatly with the response Todd gave to Srinivasan’s argument at the time:
Of course, there is an obvious objection to my argument. Effective altruism looks like what one imagines ‘doing utilitarianism’ looks like. Effective altruists weigh up various competing factors, work out what is most ‘good’ (a single, utility-like metric), and do that. Often they put numbers on subjective values too.
But, again, consider the implications of the above. If I am right that effective altruism is a decision mechanism for allocating charitable resources, then these ostensibly ‘utilitarian’ features are in fact not distinctively utilitarian at all. Whenever we consciously and deliberatively make complex decisions, we have to weigh up competing, non-commensurable factors, and somehow combine them into a single verdict. For instance, if you have two job offers — one that pays £30k, say, and one £40k — and the worse-paying job seems more fun than the better-paying one, you are forced to trade off two values (remuneration and enjoyment) that are not easily or objectively tallied, and come to a single aggregated decision about which is best on net. If you do so, are you a utilitarian? No! Not unless you believe ‘utilitarianism is when you decide things’.
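For what it’s worth, here is that job-offer trade-off written out as a few lines of illustrative Python. The ‘fun’ scores and the exchange rate between fun and salary are made-up, purely subjective inputs, which is rather the point:

```python
# The job-offer trade-off from the text, with made-up weights. Aggregating
# incommensurable factors into one verdict is just what deliberate
# decision-making looks like; it does not make the decider a utilitarian.
offers = {
    "better-paying job": {"salary_k": 40, "fun": 4},   # hypothetical scores
    "more-fun job":      {"salary_k": 30, "fun": 8},
}

def score(offer, fun_worth_in_k=2.0):
    # fun_worth_in_k is a purely subjective exchange rate: how many
    # thousands of pounds one point of 'fun' is worth to this decider.
    return offer["salary_k"] + fun_worth_in_k * offer["fun"]

for name, offer in offers.items():
    print(name, score(offer))
# better-paying job 48.0 / more-fun job 46.0; raise fun_worth_in_k to 3.0
# and the ranking reverses.
```

Nothing about this aggregation commits you to believing that money and fun are commensurable in any deep metaphysical sense. The exchange rate is a personal judgement call, and changing it flips the verdict; no ‘utility’ is discovered anywhere.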
What is unique about effective altruists is not that they are ‘more utilitarian’ in how they think about decisions, but rather the extent to which they think about these decisions, and proactively regulate themselves in order to make the most rational choices possible in the circumstances. In other words, others who have been interested in improving the lives of those in need have often not, historically speaking, thought in the same level of detail — or with the same focus on succeeding in their stated aims — as effective altruists endeavour to. (For a really impressive recent example of this, see this animal welfare charity’s self-evaluation.) That does not mean those other charitable givers were less utilitarian.
Now you might instead argue that the utilitarianism creeps in at the objective-setting level, rather than the resource-allocation level: ‘improving the lives of those in need’, an objective of effective altruism, might be deemed a ‘more utilitarian’ goal than, for example, ‘improving the quality of opera’, the goal of many an arts philanthropist. Yet, once again, I think it would be absurd to claim that only utilitarianism offers the tools to decide which of ‘opera’ and ‘those in need’ is more important. (Ask Amartya Sen, famed non-utilitarian, what he thinks.) That is, utilitarianism might be why, as a point of historical fact, effective altruists fixate on impact, but it needn’t be why in theory.
The non-necessity of utilitarianism is why effective altruists’ actual charitable endeavours are so uncontroversial, even by Lenman or Srinivasan’s lights. Moreover, it reminds us what is truly valuable and special about effective altruism as a whole. Here is Dylan Matthews on Open Philanthropy, from 2015:
That is, what has always made effective altruism valuable is how self-consciously it thinks through its objectives and how to meet them. Not how utilitarian it is.
That said, there is a downside to this. If you accept a non-utilitarian account of effective altruism, you ought to relinquish the view that effective altruism can or should be a form of triumphant moral philosophy, a way of explaining the world’s problems, or a way of adumbrating a future utopia. To find any of those things you will need to believe something else on top.
Equally, you are now free to articulate more or less whatever moral philosophy you like. And if your primary interest is in making an impact now, does the loss of an all-explanatory form of effective altruism really bear on you all that much anyway? Perhaps a non-utilitarian effective altruism could even be quite freeing.
III. The freedom of non-utilitarianism
Why does all of this matter? Well, I reckon this account of effective altruism would give the movement much broader appeal, whilst (as I hope I have shown) fully retaining what it is about effective altruism that makes it so valuable today.
There are a few reasons for this.
First, a less overtly triumphant-principle effective altruism would be far less susceptible to criticism, which seems to be a drag on the movement in various ways, not least in wasting effective altruists’ time. To my mind it is far better to get on with doing good in relative quiet than attract the public attention (and the inevitable, often ill-informed backlash) that comes with broad philosophical ambitions. Narrower efforts to ‘do philanthropy well’ without trying to make large claims in the arena of moral philosophy — think the Gates Foundation — do not get the kind of backlash that effective altruism gets.
Second, this version of effective altruism could be much more explicitly pluralistic in its approach to doing good, which would be a gain in itself. An excellent recent forum post by Ryan Briggs argues that Sen’s capability approach ought to serve as an important complement to other ways of thinking about wellbeing. Of course effective altruists should be thinking this broadly about how to conceptualise improvements in human welfare — but if you hold that effective altruism is ‘utilitarianism applied’, you will find yourself predisposed against exactly this kind of thing.
Pluralism matters for more reasons than you might think. Lots of people, as McMahan has written, criticise effective altruism because they agree with Bernard Williams’s objections to utilitarianism and think those objections straightforwardly apply to effective altruism too. Since I broadly share Williams’s criticisms of utilitarianism myself, it is very important to me that effective altruists are able to show why they aren’t subject to the same claims. It should be clear there is ample space for non-utilitarians in the community, which can only be true if we do not think of effective altruism as applied utilitarianism.
Finally, and most speculatively, this view of effective altruism might help us think about a case for longtermism that doesn’t rely on population ethics, or arguments involving colonising space, and so on. So I will finish by briefly discussing this.
IV. The longtermist turn
Throughout this post I have tried to talk about classic effective altruist activity — that is, global health and wellbeing efforts — in isolation, deferring discussion of longtermism until now. And that is because I think longtermism complicates this story, in some potentially open-ended ways.
It strikes me as clearly true that the longtermist turn is effective altruism’s most controversial intellectual development. It is also, it seems to me, most obviously ‘the clear next step for effective altruism’ for those who view effective altruism in the triumphant-moral-theory sense I have critiqued here. (Eric Schliesser’s multi-part criticism of MacAskill is, for my money, the most interesting stuff that has been written about any of this, if you’re keen.)
So on one hand there are understandable reasons for the widespread inclination to reject longtermism. If longtermism only follows from effective altruism’s pretensions to moral philosophy, then maybe it is one extension too far. Maybe its troubled reception is yet another sign that the attempt to elevate effective altruism into a moral theory is simply doomed.
But in my view longtermism doesn’t only follow from triumphant utilitarianism. That is, you can think effective altruism is doomed to fail as a moral theory, and yet still readily believe that the longtermist turn is a valuable complement to effective altruism’s traditional areas of interest.
In part, that is because the existential risks that longtermist charitable activity has so far concerned itself with are not actually very far away or removed at all. Experts think artificial general intelligence is coming within the next 40 years. Prediction markets think it is coming within sixteen. (Financial markets, it should be said, are less bullish.)
And artificial general intelligence is obviously a risk. We can argue about how much of a risk it is, and whether Eliezer Yudkowsky is more doomster than booster or whatever, but it would be obviously mad to ignore it completely.
As a recent tweet put it:
So perhaps there are two reasons why artificial intelligence risk research is controversial. One is the understandable concern that it might simply be the product of the imaginations of people with indisputably ‘out there’ views about population ethics or space colonisation (not to mention far worse); and, as these people’s critics point out, if this kind of approach comes at the cost of traditional global wellbeing efforts, then that is really very bad indeed.
But as that tweet suggests, perhaps this research is also controversial because people have simply overlooked how significant artificial intelligence actually is. Its consequences are going to be very big, and they are difficult to predict. So although it is perfectly defensible to be deterred by longtermism’s philosophical origins, to write off longtermists’ actual charitable work — their acts, not their philosophy, to use the distinction I have tried to draw across this essay — would be to make a major error indeed.
In other words, just as the non-utilitarian justification of effective altruism is that it seeks to correct for mistakes and irrationality in international development, there is a non-philosophical justification of longtermism: namely, that it corrects for mistakes and irrationality in how most people perceive the course and scope of technological change. This applies not just to artificial intelligence, but also to longtermism’s other areas of focus, like bioterrorism, pandemics, or nuclear war.
And this justification, it ought to be added, should be similarly uncontroversial. It should be obvious to people of all philosophical inclinations that it is bad if we all die, and good if we try to assess and mitigate the risks of that happening. (Indeed, this easily passes our pre-existing cost-benefit analyses.) Just as it is uncontroversial that GiveDirectly or the Against Malaria Foundation do good.
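To put a toy number on that cost-benefit point, here is a back-of-the-envelope sketch. Every figure is an illustrative assumption rather than anyone’s actual estimate, though the value-of-statistical-life number is in the range US regulators conventionally use:

```python
# A back-of-the-envelope sketch of why extinction-risk mitigation can pass
# an entirely conventional cost-benefit test, counting only people alive
# today (no population ethics, no future generations required).
value_of_statistical_life = 1e7   # ~$10m, roughly the standard US regulatory figure
people_alive_today = 8e9          # rounded world population
risk_reduction = 1e-4             # suppose work shaves 0.01pp off extinction risk

expected_benefit = value_of_statistical_life * people_alive_today * risk_reduction
print(f"${expected_benefit:,.0f}")  # $8,000,000,000,000 -- i.e. $8 trillion
```

On assumptions like these, even a tiny reduction in extinction risk is worth a colossal sum to the currently living alone, which is the sense in which the justification needn’t lean on any triumphant moral theory.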
But that is not what people think longtermism is about, mainly because that is not where longtermism comes from, nor how its most famous adherents justify it. (Perhaps some longtermists’ relatively heterodox methodologies play a part, too.) And with all that in mind, who can blame those new to longtermism for looking at it, believing it to be a form of highly particular moral theory, and coming away with at least as many reservations about it as Srinivasan had about classical effective altruism more than seven years ago?
If anything, the advent of longtermism is perhaps the clearest reason to abandon the strong utilitarian account of effective altruism as a whole. It is here, more than anywhere else, that effective altruists’ pretensions to moral philosophy have put off outsiders. This has led many otherwise persuadable people to overlook what could, and should, be a very widely-appealing rationale for longtermist charitable work. And if effective altruists want to bring existential risks to greater public attention, then that is quite the error indeed.
Update: this post has received a few interesting and constructive comments on the EA forum, which I would recommend reading if you are interested in digging into this further.
Really enjoyed this - reading it again I find myself less in disagreement with it, and it seems subtler and more interesting, which I will arrogantly attribute to my comments on the draft. (EDIT: to be clear, this was an attempted joke.) I'll just leave two thoughts:
1) You're right that, if we are *ever* to make decisions, we somehow have to make trade-offs between seemingly incommensurable ends, and it cannot be a criticism of EA that it demands that people do this. But a better criticism is not 'EA demands that we make trade-offs', but rather 'EA demands that we make trade-offs in a very particular way' - viz., through more-or-less explicit EV calculations.
You argue that there's not going to be much of a difference between this way of making trade-offs and any other reflective approach that is basically defensible. But whether different approaches to trade-offs lead to differences in practice is a contingent empirical question, and I worry you don't go into enough detail to establish your suggestion that we can all get along here.
For instance, in global health and wellbeing, Michael Plant has very explicitly argued that technical philosophical differences matter a huge amount to altruistic resource allocation (https://forum.effectivealtruism.org/posts/JgqEqsa6iAtqGLYmw/the-elephant-in-the-bednet-the-importance-of-philosophy-when-1). I'm not sure whether I agree with Plant in the global health and wellbeing case, but in the animal advocacy space it's generally agreed that these philosophical differences matter a tonne: there have been debates now for decades between welfarists, animal rights activists (and their more extreme cousins the abolitionists), and more minority factions, and it is received wisdom that these philosophical differences have serious practical upshots - comparing the actions of welfarist and abolitionist orgs is a fascinating exercise.
EA has often aggressively stumbled into this debate, in ways that are ignorant of these philosophical differences and more than a little offensive; and the best analysis I know of that does conclude 'these differences shouldn't matter to practice' also suggests that EA has been doing animal advocacy wrong this whole time (https://forum.effectivealtruism.org/posts/9qq53Hy4PKYLyDutD/abolitionist-in-the-streets-pragmatist-in-the-sheets-new). So I think you're a little quick to assume away practical upshots from philosophical differences.
2) Your suggestions about 'non-utilitarian longtermism' look a lot like the proposals of EAs who want to *reject* the label of longtermism, such as Carl Shulman (https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/) and Scott Alexander (https://forum.effectivealtruism.org/posts/KDjEogAqWNTdddF9g/long-termism-vs-existential-risk). And likewise, those who care about existential risk but want to make space for more methodological pluralism tend to similarly reject the concept of 'longtermism': see this paper co-authored by my friend Matthijs (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4118618) or this very critical one that's stood up very well (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3995225).
It's pretty clear that the people who invented the term 'longtermism' really did mean for it to denote a 'triumphant moral philosophy' influenced by Bostrom's ideas about space colonisation - see Nick Beckstead (https://rucore.libraries.rutgers.edu/rutgers-lib/40469/PDF/1/play/) and MacAskill (https://forum.effectivealtruism.org/posts/qZyshHCNkjs3TvSem/longtermism). And unlike 'altruism' or 'effectiveness', 'longtermism' is not a common term that the EAs have interpreted in a certain way - it's a technical term that they have essentially defined to mean 'triumphant moral philosophy about the future'. The longtermists and their critics are both in agreement over this definitional fact. So what value do you see in trying to redefine the term 'longtermism' and appropriate it for pluralism, rather than simply giving it up and moving onto other things?