Peter McLaughlin

I like this. You'll know that I'm really interested in arguments that start from value-pluralist non-consequentialism, but try to stake out the arena within which some form of maximise-y Bayesian-y consequentialism is nonetheless a useful and reasonable model for decision-making; that's what you're doing, and I think this attempt is super cool. I am, however, not sure it fully works.

I think it's too weak in some ways. Your model seems to be, basically, that GiveWell exists to fulfil the demands of a pre-existing group of charitable donors. Your notes hint at more complicated spillover effects - GiveWell's mere existence pressures more charities to release helpful data about their impact, and renders cause-neutral impact more legible to 'retail donors' - but I think you overlook a major phenomenon here: you guys have turned lots of people on to the very idea of charitable giving that is driven by cause-neutral impact. Coming across the concept leads people - not everyone, but not no-one - to do a Hank Scorpio "why didn't I think of that?" and change their charitable giving habits. There are different ways to interpret this, but I take it to suggest that cause-neutral donation is a good thing more people should be doing, and that people generally had not been exposed to the idea, or thought it through in depth, before ~15-20 years ago.

But your explanation of GiveWell's particular niche basically proceeds as follows: _if_ you want to do cause-neutral donation, _then_ you should do maximise-y reasoning (or pay attention to a charity evaluator who does it for you). You don't suggest any reasons for accepting the 'if', beyond the general reasons that giving to charity is good: if giving 10% to AMF does it for you, great, but equally you can go run a 10k for Cancer Research. In the first place, I think this is normatively too weak: as a matter of ethics, I think yousins' work is more important than that. But more relevantly, it's descriptively too weak: the model can't explain a lot of the appeal of GiveWell, which comes not from serving people with a pre-existing commitment, but from turning people on to the idea, which they think is good for reasons that go beyond general 'charity is good, bowling alone is bad'.

But I also think your model is too strong. I actually don't think that most people are aiming for some version of impact when they give to charity. In part I do actually think the classic proto-EA Sequences points here are important (https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity), but I think there's also a different way to think about this. Your post hints at the idea, though it doesn't actually come out and say it, that it would in some sense be disrespectful to your friends if you were to suggest that they didn't want to 'see their scarce resources put to best use', for some meaning of the word 'best'. I agree with the thought that ordinary charitable activity often does deserve our respect, even if we ourselves are giving to GiveWell top charities; but I disagree that respecting the activity means understanding it in terms of impact.

If some version of impact-driven, maximise-y thinking were simply a precondition for rationality, then absolutely, yes, to say that someone is not aiming for impact would just be to say that they were being irrational; dogmatic EAs are willing to bite that bullet and be disrespectful, and you are less willing, but you both agree on the premise. I don't agree on the premise (and frankly I think any version of value pluralism worth the name rejects the premise, though that is a different argument).

And, as Daniel Ellsberg wisely observed (https://www.jstor.org/stable/1884324), the question here is kind of testable. There are types of behaviour that cannot be understood in terms of maximise-y impact thinking _no matter how 'impact' is defined_; there are other types of behaviour that, if we wanted to think of them as driven by impact, we'd have to define 'impact' in an implausible and gerrymandered way. And Ellsberg says that we can stand back from our ordinary frameworks of economic rationality, look at these types of behaviour, and rely on non-circular types of reasoning to ask, 'does this seem reasonable and respectable?' If we ever answer 'yes', then your assumption doesn't hold in that context.
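To make that testability concrete, here's a minimal sketch (my own, using the standard one-urn example from that paper, not anything in your post) of why the common Ellsberg choice pattern can't be maximising under _any_ assignment of beliefs. An urn holds 30 red balls and 60 balls that are black or yellow in unknown proportion; most people prefer betting on red over betting on black, yet also prefer betting on black-or-yellow over red-or-yellow, and no single subjective probability for 'black' makes both preferences expected-value maximising:

```python
# One-urn Ellsberg example: 30 red balls, 60 black or yellow in unknown
# proportion. Gamble A pays on red, B on black, C on red-or-yellow,
# D on black-or-yellow. The common pattern is A over B and D over C.
# With a fixed prize, an expected-value maximiser ranks gambles by win
# probability, so we scan every possible belief about 'black' and show
# that no belief makes both preferences hold at once.

P_RED = 1 / 3  # 30 of the 90 balls are red, by stipulation

def pattern_is_maximising(p_black: float) -> bool:
    p_yellow = 1 - P_RED - p_black
    a_over_b = P_RED > p_black                         # bet red over bet black
    d_over_c = p_black + p_yellow > P_RED + p_yellow   # i.e. p_black > P_RED
    return a_over_b and d_over_c

# every candidate belief p_black in [0, 2/3]
assert not any(pattern_is_maximising(i / 600) for i in range(401))
print("No probability assignment rationalises the common choice pattern.")
```

(The scan is overkill - the two inequalities reduce to p_black < 1/3 and p_black > 1/3, which can't both hold - but it makes the structure of the claim visible: we quantified over every candidate belief, so the pattern isn't maximise-y _no matter how the probabilities are filled in_.)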

I think that many types of ordinary charitable behaviour, when looked at through this lens, aren't maximise-y at all. While some aspects of this behaviour are both non-maximise-y and not really worth our respect (as with the scope-insensitivity errors in the LessWrong post linked above), others are non-maximise-y but nonetheless entirely worthy of respect.

You might descriptively disagree with me here, and assert that actually, no, ordinary charitable endeavours really can be plausibly understood as driven by impact. Or you might normatively disagree with Ellsberg, and say that there really is no way to understand rational economic decision-making except in terms of most effectively allocating scarce means to chosen ends, so trying to evaluate that framework in terms of our judgments about which decisions are rational would be circular. Those are both reasonable and widely-held views. But my point is just that you are relying on this assumption about economic rationality. And it is exactly this assumption that I want to attack, and that I think lots of people who dislike GiveWell want to attack; so your model ends up assuming too much, and proving too much, to do what it set out to do.
