I spent today at the Effective Altruism Summit in Berkeley, which Leverage is hosting for the second time. Effective Altruism is a movement whose members aim to improve the world as effectively as possible. People* from all over the world flew in to attend this event, which included speakers and attendees from many of the major EA organizations, as well as people who’ve financially supported these organizations, either as primary donors (e.g. Peter Thiel) or by “earning to give.”
To preface this blog post, I want to state that I don’t currently identify as an EA – although I’ve been working at Leverage Research since February, this is one of my first times interacting with people from the broader community, and I have quite a few reservations about it right now.
There are many things I want to say about this topic, but the primary question I want to pose here is this: What is Effective Altruism’s ultimate plan to improve the world?
There were a lot of plans discussed today.
People discussed individual plans for being better EAs:
- By vying for higher-paying jobs in order to more effectively “earn to give.”
- By switching careers so that they can do more impactful work.
- By more effectively choosing a cause to support (with either time or money).
Community leaders discussed organizational plans for improving the world:
- Peter Thiel (cofounder of Founders Fund) explained that his plan for deterministically changing the world is by fostering new technology (in the broad sense of the word).
- William MacAskill (cofounder of Giving What We Can & 80,000 Hours) explained the Center for Effective Altruism’s plan, which is to grow the EA movement.
- Anna Salamon (founder of the Center for Applied Rationality) explained that the hope of spreading rationality is that people will discover cheat codes (for example, written language) that will allow us to improve the world at a faster rate.
- Eliezer Yudkowsky (cofounder of the Machine Intelligence Research Institute) talked about MIRI’s plan to create friendly artificial intelligence.
- Geoff Anders (founder of Leverage Research) explained the Leverage plan, which involves creating a class of philosopher scientists who vastly improve scientific methodology and psychology research.
But while there are plans for what individuals can do, and plans for what each organization intends to do, there isn’t an overarching plan to coordinate all of the EA organizations – a plan that assures us that what the EA community is doing is enough.
There’s a key assumption being made here that I don’t understand: that this haphazard level of coordination between organizations is enough to guarantee that the larger EA movement will be the one to massively improve the world.
What I naively imagined before coming to this conference was that there was some larger or more meta entity making sure the members of the EA movement were working together effectively enough to guarantee utopia – not merely that we were improving the world more effectively than other people.
What if we live in a world where there are 10 critical problems, and people only step up to solve 9 of them, neglecting the tenth because everyone assumed that someone else was already taking care of it? This is a disaster scenario, and not one that the current community seems able to mitigate.
I imagine an extremely clear-thinking entity identifying all of the critical problems, discovering all of the key intervention points, and ensuring that all of them are being taken care of adequately (and if they aren’t, then actively persuading people to switch over to these projects, or running these projects itself).
Is this necessary? Is it possible? At this point, I’m really not sure.
*White men, primarily. It was funny to see, for one of the first times in my life, the line to the men’s bathroom exceeding the line to the women’s.