26 Comments

There are some good points here, but two important errors:

- the AIDS example is kinda bad. Most EA global health initiatives are for malaria or deworming, which aren't easily avoided by just making smart decisions.

- the investment argument is also kinda shaky. Partly because it's unclear that donated money displaces investment (it can also displace trivial marginal consumption), and partly because most investments aren't comparable to the steam engine (the median invested dollar is fairly useless). On the other hand, there's an argument that in terms of net economic growth, investing in removing plagues and global health hazards is actually pretty good; it's just not solved by markets, since the value created is hard to capture. And in the long term, "a world where you can go to Africa without worrying about parasites and malaria" is better even for selfish reasons (both for me personally and because international companies can get more value out of it).

author

The median invested dollar is far from useless. The median invested dollar yields a return of about 7% per year on average. So $1,000 invested for 70 years would grow to roughly $114,000. Hard to see how purchasing bed nets could compete with that over the long run.
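
A quick sanity check on the compounding, assuming that 7% average return holds and compounds annually:

$1,000 × 1.07^70 ≈ $1,000 × 114 ≈ $114,000
$1,000 × 1.07^100 ≈ $1,000 × 868 ≈ $868,000 (so reaching roughly $1M at 7% takes closer to a century)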

Given the rates of nonconsensual and dubiously consensual sex in areas where HIV is common, is it accurate to claim that people there willingly risk infection in order to have sex?

author

Sexual assault is clearly a different matter. However, bailing out women who had “dubiously consensual” sex is certainly very low on the list of moral priorities, given how many people are suffering due to situations they had no control over.

I really don't think any of these cases are nearly equivalent, but let me know why you think I'm wrong.

1) In the first case, Chett is being extremely irresponsible, and I think incentivizing this behavior is what makes our intuitions go astray. This case is also different because it's not actually death but risk of death (I understand that one can deal with probabilities and expected value, but that's not what the initial case is). As for the AIDS case, these are not at all similar. First, GiveWell (the place where most EAs direct their global health and development giving) does not recommend charities for AIDS, and there is no notion of responsibility in the charities on GiveWell's list (https://www.givewell.org/charities/top-charities). Second, the degrees of responsibility seem very far apart: there is a very low chance of getting AIDS from sex, whereas Chett is literally just making a dumb decision because he wants money. Also, it seems like you're calling for people in Africa to just not have sex?

2) This case also seems misguided. Singer's case doesn't tell you to risk your life. I also think that the fact that he wouldn't save you is not a good reason to not save someone.

3) On the third point, Watt didn't know how much good the steam engine would do, so this is largely a question about uncertainty. Do you not think it makes sense to sacrifice people now for a lot more people in the future? I'm not even sure I understand where your divergent moral intuitions are coming from.

4) I agree that deciding between making more EAs and doing charity is a hard problem, but the answer is certainly not going to be just one of them, as the smart move is to hedge across multiple cause areas that all have some probability of resulting in great consequences.

Also, contrary to what your description states, you did not show why Singer's conclusions do not follow from utilitarian premises. You claimed (I think with some errors) to show that utilitarianism doesn't do a good job of accounting for all the moral facts (e.g. desert for bad people, universalizability, the counter-intuitiveness of longtermism, etc.). While I think all these critiques of utilitarianism are worth taking very seriously, I'm not sure you showed what you attempted to show.

author

Thanks for the thoughtful response!

1) I think that unprotected sex in a region where AIDS is prevalent is reckless and irresponsible in a way that makes it similar to Chett's activities. Sorry to sound like a Christian fundamentalist, but people in Africa can avoid AIDS by remaining virgins until marriage and marrying another virgin.

2) I think that most people think that non-reciprocity is a good reason to not help someone when there is a significant cost to you. The cost could be monetary (resources) or risk.

3) Watt could have been certain that INVESTMENT (building the steam engine) would yield more value over the long run than CONSUMPTION (feeding the orphans). My point is that even if you accept Utilitarian premises, donating to charity is misguided because the normal work of civilization, i.e. inventing things, having families, growing the economy, etc., will yield more value in the long run.

4) It is extremely implausible that any effort to help third-world people could have a long term impact comparable to making more EAs.

In this post I don't challenge Utilitarian premises. My point is that even when you accept Utilitarian premises, in concrete examples, the thing which actually maximizes long-run utility is the normal thing you would have done anyway (which happens also to be the thing which you would have done under alternative moral theories like virtue ethics or Rights Theory).

The thought experiments where different moral theories are alleged to conflict actually break down when you look at them carefully.

You should let Steven drown not only because he's a bad person getting what he deserves, but also because letting a non-cooperator like Steven drown is the effective Utilitarian choice when saving Steven involves any substantial cost or risk to yourself.

Sure, you can make the case that helping someone becomes less morally salient if they took a big risk and brought something upon themselves. I am also open to arguing about whether the specific top EA causes are good ones. But a world where we completely internalize the heuristic "X took a risk, suffered the consequences, and thus it is not moral to help them" is not a world I long for. Almost all actions have both upside and downside risk, and rarely the former without the latter. It is hard for me to see how anyone at all is worthy of our help under this approach. I am not so sure the "few well-raised, high-IQ, highly altruistic children" will be so altruistic after all if we hammer them with this heuristic.

author

I think that charity should begin at home. You can build a little tribe which is insulated from the outer world. I'm not calling for the end of charitable feelings, but the effective way to respond to those feelings is to channel them into a self-sustaining community, not to throw them away on the ungrateful masses.

+1

It's not just the utilitarian effect of an action, but how that action affects us (and those around us) as people. If it makes us more or less likely to act well in the future, that is part of the merit of the action.

-----------

When the West was industrializing, it was really obvious that market forces were making people's lives better. Becoming some industrial magnate meant building things with obvious, physically positive effects on people's lives.

Today, succeeding in the market kind of feels like a scam. There are some people who do things that make the world a better place, but a lot of us feel like we have "Bullshit Jobs": Red Queen races or outright destructive work. If you do a bullshit job all day, "give up your daily latte and save a life in Africa" has some appeal.

And honestly, if the third world had the IQs of the first, some of these investments really would have the same positive impacts that they had 100 years ago in the West.

The other thing is that the poor in the first world are, at this point, mostly the undeserving poor. Maybe some of the poor in the third world are undeserving too, but the very fact that they are so remote makes that hard to see. You can see the poor in America behaving badly; they don't show this on PBS often:

https://www.youtube.com/watch?v=dwnx05OFtO0

Jun 14 · Liked by Simon Laird

You say “We wish that we had the resources to save everyone. But we don’t have sufficient resources”. Is it not so much that we (collectively) don’t have the resources, but that we don’t know how to apply the resources that we do have effectively? In other words, the second- and third-order (and so on) consequences of any intervention are ultimately unpredictable.

author

By “we” I mean Pro-Civilization people who care about morality. We are currently a small fraction of the population and we have very little institutional power. So we need to focus our efforts on taking over the Regime.

If we were in charge of major countries, we actually probably could do a lot to improve the third world at minimal cost - we just can’t afford to try that now since we should focus on gaining power.

Jun 21 · Liked by Simon Laird

Yeah, Singer’s thing has always been disingenuous. One of the assumptions underlying our impulse to save the drowning kid is that there’s someone else in the world (e.g., the kid’s parents) who would save the kid if they were here, but they’re not. If there were a kid who literally nobody else in the world cared about — in fact not one kid but many thousands every year, to play out Singer’s metaphor — would you really feel morally obligated to save them all, and then presumably feed and clothe and house them all?

In other words, if I encountered a drowning kid once, I’d ruin my expensive suit and save him. But by the 10th or 100th or 1000th drowning kid (let’s be honest, maybe by the 3rd) that nobody else gives a shit about, I’d start developing a very callous attitude toward drowning kids, the same way I have a callous attitude toward insects or, frankly, toward factory-farmed animals. We develop whatever defence mechanisms we need to get through the day. When a moral philosophy asks too much of us, we reject it and get another.

author

I don't agree that philosophy is a "defense mechanism" but I agree that the fact that there are thousands of drowning children undermines the duty to help.

Jun 17 · Liked by Simon Laird

How far away does a drowning child have to be before letting them die won't 'brutalize my heart'? Let's say I know for sure the drowning is happening. Can they be on the other side of a wall? Is my heart safely unbrutalized if they're a block away? Is the criterion that if I need to use a vehicle to reach them in time, then I don't have to worry about it?

author

It is a matter of degree. So yes, if they are several blocks away or on the other side of a high wall, that makes it less salient.

Jun 18 · Liked by Simon Laird

I appreciate you biting the bullet, but it's a very weird moral stance you've ended up with. Particularly at the end of your essay, where you say that we should be solemn and serious about the deaths happening on the other side of the world, but also deliberately not do anything even if we could.

I think you could use your reasoning to *also* advocate for giving only what you can afford to charity, which is what most EAs do to my knowledge, and is explicitly recommended by at least one big org, Giving What We Can. This allows you to help with the problems that you yourself acknowledge and feel bad about, while also leaving the donor enough to start a business, invent new tech, and start a family. This approach wins just on Occam's razor for me, because I don't need to write an essay of questionable analogies to explain why the intuition to help shouldn't be followed. I just care about something and then help.

author

The problem is that Giving What We Can's position is incoherent. If you choose to have a family of your own, you're already leaving many far away children to drown.

If investing in your own business/family is justified because its rate of utilitarian return is higher than that of charity, then it makes sense to invest all of your resources in your own business/family.

Jun 18 · Liked by Simon Laird

1) Having a diverse portfolio, so to speak, of different routes to making the world better doesn't seem that incoherent to me? Maybe that business or child could make the world massively better, but you can hedge against that with good ol' bednets and vaccinations.

2) It seems to me that utilitarians do a lot of work to make it so their stated positions don't endorse crazy things like ruining your own life for the sake of buying anti-malaria drugs. I'm pretty sure that without these arguments, almost all utilitarians *still* wouldn't ruin their lives, out of a desire for a non-ruined life. This seems fine to me? And it should be good news for you, as it proves you can care a lot about helping people in Africa without also going insane.

3) I'm kinda tacking this on as an extra point rather than rebutting anything, but have you read 'Wholehearted choices and "morality as taxes"' by Joe Carlsmith? It's a pretty short blog post; the point that sticks for me is that I actually really truly *want* to save the drowning child, to the point where talking about how far away it is just seems like missing the point. So too with the whirlpool jerk and the biker: I don't care about what they deserve, I just don't want them to die. (Which means, for me, pulling the guy out of the water but NOT giving the biker gas money, because that would incentivize his reckless behaviour.)

Jun 15 · Liked by Simon Laird

The problem with this article is that it starts in the middle of the argument. Without a grounding for morality, you cannot get back to first principles.

author

Moral reasoning should not be based on first principles.

You don't need a philosophical theory to know that it's morally wrong to torture people to death for fun.

Moral reasoning that does not start with first principles is so much sound and fury. We can look back and see cultures that believed all sorts of things. We can’t just say ‘well, they shouldn’t need…’

author

I actually don’t believe that past cultures had wildly different moral beliefs… maybe a topic for a future essay.

I wouldn’t have put it as a question of past vs present, but certainly different cultures tend to ground their beliefs in different areas. The very idea of a ‘martyr’, for example, makes no sense in an atheist worldview. Nor does ‘morality’, for that matter, but that’s a longer argument :)

(And one I have had. See my ‘Pizza’ series.)

This kind of balderdash is why both EA and consequentialism generally are nonsense. It's an impossible calculus most of the time and leads to stupid ideas like reparations. Virtue ethics is superior but also insufficient, but that's a different story.

JFC, it should be illegal to be this stupid. The only good thing here is that it seems highly unlikely a little bitch like the OP will be breeding any time soon, so that’s something. P.S. Not every stupid idea that comes into your mind needs to be vomited onto a page. Delete your account.

I’m intrigued, tell me more.
