Note: the below represents my own views, not those of GiveWell, my employer.
For just over a year, I’ve worked on the research team at GiveWell, an organization that aims to find and fund the most cost-effective charitable opportunities we can. By ‘cost-effective’, we mean that we want to have the ‘most impact’ per dollar of funding that we direct.
At a basic level, I think this is a pretty intuitive goal. Some problems are bigger than others. Of those bigger problems, some could very plausibly be addressed by philanthropy, others less so. (Take wars, for example: what matters there usually isn’t money, but politics.) And of the problems that can be readily addressed by charitable giving, you’d naturally expect some organizations or programs to be better or worse at solving them than others. Finding the right problems and the right programs is a large part of what GiveWell does.
The intuitive case for impact isn’t unorthodox, I don’t think. Every well-intentioned non-profit wants to see their scarce resources put to best use.1 I know a number of people who have worked in local British charity shops, a thousand miles from GiveWell’s world of randomized controlled trials and Bayesian inference, who are profoundly devoted to impact as they understand that term. The kind of impact that GiveWell talks about might be different, but the basic idea that doing more good in the world is better than doing less is not. To that end, the fact that people at GiveWell are extremely passionate about making the most of our resources might be the most conventional thing there is about us.
But GiveWell does think about impact differently in some respects. How so? Personally, I think what makes GiveWell distinctive is that it thinks about impact:
1. Comparatively;
2. In terms of a specific range of outcomes, not outputs.
I also think it’s these two things, not valuing ‘impact’ in itself, that sometimes cause misconceptions about the function of ‘cost-effectiveness’ in global philanthropy. (Again, I’m speaking purely for myself here.) Given this, it seems worth thinking about how this all might fit together.
Historical context might help with (1). GiveWell was created by two everyday donors who wanted to figure out where to give. They were broadly cause neutral: that is, they wanted to spend their money well, but weren’t predisposed towards any one area of giving over any other. As a result, GiveWell’s very first list of ‘cause areas’ ranged from averting mortality in Africa to helping disadvantaged adults in New York City.2 Without an intrinsic reason to prefer one issue to the other, the comparative ‘price of impact’, or cost-effectiveness, becomes one obvious way to decide.
Of course, cause neutrality isn’t metaphysically ‘correct’, nor is it natural to every donor. A couple of my friends, for instance, have run marathons and chosen to fundraise for organizations that research cures for diseases like cancer, since relatives of theirs have suffered from them, and they want to try to help those who may find themselves in a similar position in future. In other words, they are intrinsically motivated to support this issue, specifically, regardless of the relative cost of impact.
Even though my friends aren’t cause neutral, so aren’t setting themselves up to think about cost-effectiveness like I do, I still think their behavior is deeply admirable. Two of my grandparents died of cancer long before I was born; a third had it while I was young. When my friends give to organizations like this, I don’t for a moment think of them as any less motivated by ‘impact’ than I am. I think of them as having understandable, deeply personal reasons to be attached to a particular cause, which informs where they want to make an impact.3
GiveWell exists to serve a different audience: those particular donors who, for whatever reason, aren’t approaching the question of ‘where to give?’ with a predisposition towards a cause in mind. To serve donors, in other words, who attach as much importance to a life saved abroad as to one saved at home, or to a life saved from diarrheal disease as to one saved from cancer.
It turns out that serving the cause-neutral donor is a complex task.
For one thing, the organizations that (like GiveWell4) evaluate a wide range of cause areas tend to be huge NGOs or governments, which have the budgets to tackle problems in science, health, and development all at once. Organizations that more typically receive funding from individual donors like you or me usually don’t work this way. Instead, they tend to focus on building out deep subject-area expertise within a particular field or intervention, in order to be able to maximize their impact within that specific domain.
But then someone in my role comes along, and has to ask organizations like this a strange and awkward question: ‘your work is evidently impactful — but how impactful, exactly?’. It’s a strange question, because there’s absolutely no doubt that these cause-area specialists are impact-driven. And it’s awkward, because these experts are, nevertheless, not generally used to thinking about whether they’re working in the absolute ‘most impactful’ line of work: quite understandably, they’ve never had to adopt GiveWell’s comparative, cross-causal perspective themselves.
But if your organization’s grantmaking covers the breadth of global health, this awkward question is a necessary one. To take one of my own areas of work as an example: we know that the chlorination of drinking water reduces child mortality. But we still have to think comparatively. We could direct funding to a chlorination program, or we could direct it to other cause areas that we also know are extremely important, like maternal and neonatal health, malaria prevention, or nutrition. In serving donors who have no intrinsic preference among these causes, understanding the exact effect of each program in comparative terms really does matter. Even if it means asking people strange and awkward questions.
And there is another, more fundamental challenge with the comparative approach. To compare impact across causes or programs, you have to select the metrics on which to compare it. Which brings me to (2) — the fact that GiveWell assigns value to outcomes, not to programmatic outputs.
In everyday usage, what we mean by ‘impact’ tends to be highly contextual, or program-specific. We might talk about the impact of a school feeding program by saying that it delivered a hundred school meals to young children, or about a malaria program by saying it delivered a hundred insecticidal nets. When we talk about the impact of a program in these terms, we are referring to program outputs, specific to a given intervention.
But we’re in the comparison game, and it’s hard to compare these outputs to one another: I might now know whether a program has delivered the service it intended to, but how many additional school meals is the delivery of an insecticidal net worth, or vice-versa? To answer that question, you need a way of consistently and fairly comparing the effect of those outputs, across contexts: in other words, you need to ascribe value to specific outcomes.
What are the outcomes you should care about? Sadly, ‘goodness’, in the abstract, just won’t do. ‘Goodness’, after all, is an amorphous and even philosophically fraught concept: we’re looking to achieve greater clarity about our comparative criteria, not less.
Given this, we need to make a subjective decision about what the most important characteristics of a ‘good’ program might be. That means identifying proxies, or indicators, of goodness: selecting a range of definable, measurable outcomes that we think capture the bulk of what people (both donors and recipients) generally have in mind when they talk about impact. (In 2019, GiveWell recommended a grant for a survey of program beneficiaries, to understand how they thought about this topic.)
There is an unavoidable trade-off here: between fidelity to the infinitely complex subjective moral question of what ‘good’ truly is, and the ability to make consistent, legible comparisons between the impact of many different programs, given finite time and resources. If GiveWell was primarily engaged in practicing moral philosophy, we might lean towards the former: we might attempt to use as many proxies for goodness as possible, in order to capture as much complexity and nuance as we can.
But GiveWell isn’t in the business of moral philosophy. It’s in the business of recommending grants that accord with the broad moral intuitions of our donors and recipients. So, instead, we translate the outputs of a program into three outcomes:
Mortality (deaths) averted;5
Morbidity (suffering) averted;
Increases in consumption.
Programs that score well on these measures are programs that we think are saving or improving lives — which, I dare suggest, we can consider to be uncontroversially good outcomes. (To quote the philosopher Bernard Williams, slightly out of context: if you’re questioning whether saving or improving lives is good, you’re having “one thought too many”.)
Uncontroversially good outcomes — but certainly not representative of all that is good. At least in my view, GiveWell doesn’t need to, and isn’t trying to, capture some cosmic, metaphysical ‘truth’ about the ‘real content’ of goodness, or do away with the plurality of different values in the real world. By focusing on the three outcomes above,6 GiveWell is instead trying to capture enough information to understand whether our grants are leading to more inarguably good outcomes than they would if we made different ones — which is what an impact-focused funder ought to do.
There is clearly a trade-off here: in focusing on this range of indicators, we’re not focusing on other things — procedural justice, say, or aesthetic beauty — that might well make up a reasonable person’s definition of ‘good’, and which I think (in distinction to at least some effective altruists!) are worthy targets of philanthropic spending in themselves, but which aren’t so suitable for consistent, quantifiable comparisons of impact between causes.7 The latter is the specific function that GiveWell serves. But this isn’t the only legitimate approach to philanthropy there is, and supporting GiveWell’s programs doesn’t preclude you from finding intrinsic value in these other things too.
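To make the shape of that comparison concrete, here is a deliberately toy sketch of the ‘outputs into outcomes’ translation described above, written in Python. Everything in it is an assumption made up for illustration: the program names, the conversion figures, and the weights are invented, and none of them reflects GiveWell’s actual estimates, moral weights, or cost-effectiveness models.

```python
# A purely illustrative sketch of translating program outputs into outcomes and
# comparing programs per dollar. Every number below is invented for the example;
# none reflects GiveWell's actual estimates, moral weights, or models.

# Hypothetical programs, described by their cost and outputs.
programs = {
    "bednet_distribution": {"cost": 1_000, "output": "200 insecticidal nets"},
    "school_feeding": {"cost": 1_000, "output": "4,000 school meals"},
}

# Hypothetical translation of those outputs into the three outcomes above:
# deaths averted, suffering (morbidity) averted, and doublings of consumption.
outcomes = {
    "bednet_distribution": {"deaths_averted": 0.4, "morbidity_averted": 2.0, "consumption_doublings": 0.5},
    "school_feeding": {"deaths_averted": 0.05, "morbidity_averted": 1.0, "consumption_doublings": 3.0},
}

# Hypothetical weights: how much each outcome is valued relative to one
# doubling of consumption (again, invented for the example).
weights = {"deaths_averted": 100, "morbidity_averted": 3, "consumption_doublings": 1}


def value_per_dollar(name: str) -> float:
    """Weighted 'units of value' a program produces per dollar spent."""
    total_value = sum(weights[k] * v for k, v in outcomes[name].items())
    return total_value / programs[name]["cost"]


for name in programs:
    print(f"{name}: {value_per_dollar(name):.3f} units of value per dollar")
```

The point of the sketch is only the structure: once outputs as different as insecticidal nets and school meals are expressed in a common currency of outcomes, they can be set side by side and compared per dollar. The hard, contestable work lies in estimating the conversion figures and choosing the weights, which the toy numbers above deliberately gloss over.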
Before I worked at GiveWell, I studied political theory. I think the broader philosophical considerations that lurk behind the term ‘impact’ are fascinating. But I’m comfortable with the thought that GiveWell exists to fulfill a specific, almost non-philosophical need: to help those donors who share the view that we should focus on the most consistent indicators of doing good we have, who want to do as much good by those particular measures as they can, and who, as discussed above, do not begin from a position of particular preference over the specific cause area in which they do so.
GiveWell, then, does not try to, or claim to, fulfill every possible philanthropic purpose, to reveal some inarticulable ‘essence’ of impact, to be philosophically satisfying, or even to ‘care more’ about impact than other non-profits. It isn’t here to tell you what you should value, or to tell you what impact ‘really is’. Our research doesn’t imply a negative judgement about the cause areas that don’t fit within our framework, from animal welfare to the Royal Ballet. (In our personal lives, plenty of GiveWell staff donate to a wide variety of other programs.)
GiveWell aims to do something much simpler: to make grants that we think are likely to save or improve more lives than would have been saved or improved without us. (And to that end, we make our thinking public, so that people can check our work, and tell us where we’re wrong.)
I think it’s important to spell out the specificity of this goal. I think of the steady decline in charitable giving as a hugely regrettable cultural trend, an indicator of an apparent tendency towards more introverted social lives. To me, at least, it reflects the steady depletion of an essential kind of communal disposition; a glum symptom of bowling alone. And this means that I view anyone engaged in charitable giving as doing something both admirable and all-too-uncommon, whether GiveWell’s specific approach to impact appeals to them or not.
Indeed, it’s to be expected that not every donor will find what GiveWell offers to be suited to their priorities. We appeal to those who, like me, begin from that specific question: ‘how can I save or improve the most lives with the money that I have to contribute?’. To those who, like me, view the non-profit sector as paved with both good intentions and plenty of pitfalls, and who, recognizing this, want to feel confident that the ultimate destination of their donation has been the subject of plenty of thought, love and care by a team of people who have made thinking about this their life’s work.
That ought to be extremely appealing for many people, and perhaps I should write more about why it appeals so deeply to me. But it won’t be right for everyone. Such is the nature of living in a value-plural world.
1. This is, after all, the modern view of economy: Lionel Robbins defined economics as the study of “human behaviour as a relationship between ends and scarce means which have alternative uses” in 1932. Max Weber had earlier called economy “the careful choice between ends; albeit oriented to the scarcity of means that appear to be available” in Economy and Society.
2. See ‘The Case for the Clear Fund’, p. 9. (Note that the use of ‘cause areas’ to narrow the scope of GiveWell’s research in its first year reflected its limited resources, not a philosophical rejection of other problems.)
Incidentally, that business case gets to something I still find especially valuable about GiveWell. It’s hard to overstate how difficult it used to be to have any real sense of what an organization was actually accomplishing, beyond the occasional (highly unreliable) indicator of an ‘overheads to program spending’ ratio. I think that you could think of GiveWell’s growth as slowly helping align the incentives of the charitable sector in favour of greater transparency, which matters a lot – whether you adopt a cause-neutral approach to giving or not.
3. I also suspect that it is very difficult for retail donors to compare the expected cost-effectiveness of a donation to different organizations in this space, which I think is even more reason not to think that my friends are less interested in impact than I am. The task of figuring out where the expected value of a donation would be highest is really difficult — which is why GiveWell came into existence in the first place.
4. To be specific, GiveWell currently has grantmaking teams dedicated to identifying funding opportunities in malaria, vaccines, nutrition, water, livelihoods, and new areas. (I personally split my time between water and livelihoods.)
5. Note that within the category of mortality averted, we place a higher weight on averting the deaths of younger children. For more on GiveWell’s moral weights, see here.
6. We do also think extensively about qualitative considerations, externalities, and drawbacks when we do this, but I’ll skip over that for simplicity’s sake.
7. We are interested in thinking about whether there are other outcomes that we should explicitly value in our grantmaking in the future, but I’ll leave that aside for now.
I like this. You'll know that I'm really interested in arguments that start from value-pluralist non-consequentialism, but try to stake out the arena within which some form of maximise-y Bayesian-y consequentialism is nonetheless a useful and reasonable model for decision-making; that's what you're doing, and I think this attempt is super cool. I am, however, not sure it fully works.
I think it's too weak in some ways. Your model seems to be, basically, that GiveWell exists to fulfil the demands of a pre-existing group of charitable donors. Your notes hint at more complicated spillover effects - GiveWell's mere existence also pressures more charities to release helpful data about their impact, and also renders cause-neutral impact more legible to 'retail donors' - but I think you overlook a major phenomenon here: you guys have turned lots of people onto the very idea of charitable giving that is driven by cause-neutral impact. Coming across the concept leads people - not everyone, but not no-one - to do a Hank Scorpio "why didn't I think of that?" and change their charitable giving habits. There are different ways to interpret this, but I take it to suggest that cause-neutral donation is a good thing more people should be doing, but that people generally had not been exposed to the idea, or thought it through in depth, before ~15-20 years ago.
But your explanation of GiveWell's particular niche basically proceeds as follows: _if_ you want to do cause-neutral donation, _then_ you should do maximise-y reasoning (or pay attention to a charity evaluator who does it for you). You don't suggest any reasons for accepting the 'if', beyond the general reasons that giving to charity is good: if giving 10% to AMF does it for you, great, but equally you can go run a 10k for Cancer Research. In the first place, I think this is normatively too weak: as a matter of ethics, I think yousins' work is more important than that. But more relevantly, it's descriptively too weak: the model can't explain a lot of the appeal of GiveWell, which comes not from serving people with a pre-existing commitment, but by turning people onto the idea, which they think is good for reasons that go beyond general 'charity is good, bowling alone is bad'.
But I also think your model is too strong. I actually don't think that most people are aiming for some version of impact when they give to charity. In part I do actually think the classic proto-EA Sequences points here are important (https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity), but I think there's also a different way to think about this. Your post hints at the idea, though it doesn't actually come out and say it, that it would in some sense be disrespectful to your friends if you were to suggest that they didn't want to 'see their scarce resources put to best use', for some meaning of the word 'best'. I agree with the thought that ordinary charitable activity often does deserve our respect, even if we ourselves are giving to GiveWell top charities; but I disagree that respecting the activity means understanding it in terms of impact.
If some version of impact-driven, maximise-y thinking were simply a precondition for rationality, then absolutely, yes, to say that someone is not aiming for impact would just be to say that they were being irrational; dogmatic EAs are willing to bite that bullet and be disrespectful, you are less willing, but you both agree on the premise. I don't agree on the premise (and frankly I think any version of value pluralism worth the name rejects the premise, though that is a different argument).
And, as Daniel Ellsberg wisely observed (https://www.jstor.org/stable/1884324), the question here is kind of testable. There are types of behaviour that cannot be understood in terms of maximise-y impact thinking _no matter how 'impact' is defined_; there are other types of behaviour that, if we wanted to think of them as driven by impact, we'd have to define 'impact' in an implausible and gerrymandered way. And Ellsberg says that we can stand back from our ordinary frameworks of economic rationality, look at these types of behaviour, and rely on non-circular types of reasoning to ask, 'does this seem reasonable and respectable?' If we ever answer 'yes', then your assumption doesn't hold in that context.
I think that many types of ordinary charitable behaviour, when looked at through this lens, aren't maximise-y at all. Nonetheless, while some aspects of this behaviour are both non-maximise-y and also not really worth our respect (as in the type of errors in the above-linked LessWrong post), others are non-maximise-y but nonetheless entirely worthy of respect.
You might descriptively disagree with me here, and assert that actually, no, ordinary charitable endeavours really can be plausibly understood as driven by impact. Or you might normatively disagree with Ellsberg, and say that there really is no way to understand rational economic decision-making except in terms of most effectively allocating scarce means to chosen ends, so trying to evaluate that framework in terms of our judgments about which decisions are rational would be circular. Those are both reasonable and widely-held views. But my point is just that you are relying on this assumption about economic rationality. And it is exactly this assumption that I want to attack, and that I think lots of people who dislike GiveWell want to attack; so your model ends up assuming too much, and proving too much, to do what it set out to do.