Discussion about this post

Shaked Koplewitz

There are some good points here, but two important errors:

- the AIDS example is kinda bad. Most EA global health initiatives are for malaria or deworming, which aren't easily avoided by just making smart decisions.

- the investment argument is also kinda shaky. Partly because it's unclear that donated money displaces investment (it can also displace trivial marginal consumption), and partly because most investments aren't comparable to the steam engine (the median invested dollar is fairly useless). On the other hand, there's an argument that in terms of net economic growth, investing in removing plagues and global health hazards is actually pretty good; it's just not solved by markets, since the value created is hard to capture. And in the long term, a world where you can go to Africa without worrying about parasites and malaria is better even for selfish reasons (both for me directly and because international companies can get more value out of it).

Noah Birnbaum

I really don't think any of these cases are near equivalent, but lmk why you think I'm wrong.

1) In the first case, Chett is being extremely irresponsible, and I think not wanting to incentivize this behavior is making our intuitions go astray. This case is also different because it involves not actual death but risk of death (I understand that one can deal with probabilities and EV, but that's not what the initial case is). The AIDS case is not at all similar. First, GiveWell (the place where most EAs direct their global health and development giving) does not recommend charities for AIDS, and there is no notion of responsibility in the charities on GiveWell's list (https://www.givewell.org/charities/top-charities). Second, the extent of responsibility seems very different in the two cases -- there is a very low chance you get AIDS from sex, while Chett is literally just making a dumb decision because he wants money. Also, it seems like you're calling for people in Africa just not to have sex?

2) This case also seems misguided. Singer's case doesn't tell you to risk your life. I also think that the fact that he wouldn't save you is not a good reason not to save someone.

3) On the third point, Watt didn't know how much good the steam engine would do, so this is largely a question about uncertainty. Do you not think it makes sense to sacrifice people now for a lot more people in the future? I'm not even sure I understand where your divergent moral intuitions are coming from.

4) I agree that deciding between making more EAs and doing charity directly is a hard task, but the answer is certainly not gonna be only one of the two, since the smart move is to hedge across multiple cause areas that all have some probability of resulting in great consequences.

Also, contrary to what your description states, you did not show why Singer's conclusions do not follow from utilitarian premises. You claimed (I think with some errors) to show that utilitarianism doesn't do a good job of accounting for all the moral facts (e.g. giving desert to bad people, universalizability, the counter-intuitiveness of longtermism, etc.). While I think all these critiques of utilitarianism are worth taking very seriously, I'm not sure you showed what you attempted to show.

