The Effective Altruism Fig Leaf Never Truly Covered the Shame Underneath
by Neil H. Buchanan
If you are a smart, ambitious young person who wants to live a comfortable life, you can never go wrong by making it your business to tell rich people what they want to hear. And what they want to hear is that they are gracious, wonderful paragons of virtue who deserve everyone's admiration and who should keep doing what they did to make all of that beautiful money.
I am fairly certain that the first time I saw a version of that statement was in an article written by the all-time great political economy professor John Kenneth Galbraith (also known for his critiques of "the conventional wisdom"). A short online search did not turn up a pithy quote, and it might not in fact have been from Galbraith; but in any event, the observation is plainly true. One way to see this is by recalling the horrified response from the super-wealthy when Barack Obama hurt their feelings by saying that their financial activities might need to be regulated more effectively -- which Paul Krugman dubbed "the pathos of the plutocrats."
The most recent example of this phenomenon -- hyperrich people paying other people money to tell them what they love to hear -- is called "effective altruism" (EA), and it is a doozy. To be clear, I am not saying that everyone who has written about EA or has supported some of its conclusions is on the take or a fool. As I will discuss, there is a core of supportable ideas at the base of the EA morass, and it would be a shame if EA's exposure as a front for what we might call Muskism causes people to reject those good things. But in any event, EA is another example of an intellectual movement that pulls people in by dangling obvious truths or a clever insight (or two) as bait and then goes badly astray.
Here, I will briefly explain EA, which will be rather easy, given that Professor Dorf did such a good job of laying out its basics in a critical/skeptical column on the subject yesterday. I will then add to the discussion by pointing out that the next step in the EA logic -- purporting to maximize human happiness over the space of quadrillions of years -- is all but designed to greenwash today's billionaires' ill-gotten loot.
I will leave it to interested readers to click on Professor Dorf's column for many of the details of EA and its critiques, which will allow me to focus on the weird path along which the theory carries its believers. The short version, however, is that EA starts with the completely uncontroversial observation that a person who wants to be altruistic ought to do so in the most effective way possible -- hence the name of the theory, of course. The hook: "I want to do the most good possible with my money." Who could argue with that?
The problem, of course, is that it is impossible to agree on what counts as the "good" that one wants to do the most of. Professor Dorf points out that the EA universe is (or strongly seems to be) based on utilitarianism, as it essentially says that there is a way to decide what counts as a desirable outcome and then to compare the costs and benefits of different efforts to achieve that outcome. One of the most obvious examples is that giving money to buy mosquito nets to prevent malaria will save more lives than giving money to cure a disease that kills a handful of people per year. See? We want to be effective! And of course that means that any charitable cause that does not (appear to) save lives would be off the table, leaving universities, museums, summer camps for at-risk kids, and even medical research for non-lethal diseases and conditions out in the cold.
That seems wrong, but EA makes it all sound so, so logical, which is unsurprising, because there is more than a passing resemblance between EA and Ayn Rand's objectivist libertarianism. The latter continues to suck in smart (and some middling) high school boys (yes, almost exclusively boys), many of whom -- former House Speaker Paul Ryan, former Fed Chair Alan Greenspan, and more -- never grow out of their infatuation with the claim that simple logic compels a certain conclusion as an objective matter. Subjective opinions are so messy.
In turn, this carries over to orthodox neoclassical economics, which is the most successful academic/policy application to date of mindless libertarian theory. It is no mere coincidence that one of the first orders of business in almost every Econ 101 class and textbook -- most often the very first thing on Day One -- is to explain the difference between positive and normative economics. The usual pose is: "Hey, I'm a scientist, man. I deal with is, not should, which I leave to the philosophers and politicians." Of course, they in no way leave it to the philosophers or the politicians (whom they openly disrespect in any event), but they hide their normative views behind an edifice of obscurantism, typically in the form of mathematics of an advanced enough form to convince its practitioners that they are smarter than everyone else.
One result of this, as Professor Dorf and I explained in a 2021 article in Cornell Law Review, is that the standard economic theory behind things like the so-called Law & Economics Movement is billed as judgment-free objective analysis. In fact, however, economic analysis is based on nothing but selective choices about what parts of the legal/policy system should (there is that word that they claim is outside of their ambit -- should) be changed while others should be taken as an unquestioned baseline.
The necessary result of the infinite number of choices of baselines is that anything and everything can be justified as efficient, and anything and everything can be rejected as inefficient, with the difference being merely the choice to exalt one baseline over another. Even if people do so unconsciously, any baseline that they choose has normative content and is not in any way objectively true or scientifically inevitable.
As Professor Dorf and I discussed in our paper, all economic analyses (which are usually based on a mathematical specification of a utility function) can be justified as utility-maximizing, so long as one specifies what counts and does not count toward utility in reaching that conclusion. Even so, in his column yesterday, Professor Dorf wrote: "I'm not a utilitarian." Based on what I wrote above, that statement is not only false but cannot possibly be true, because everything that everyone could ever claim to be a deviation from utilitarianism can be recharacterized as the maximization of a differently specified utility function. I am not a utilitarian either, but both of us say so knowing that there is a tautological version of utilitarianism that would insist that we are mistaken. That version is, however, trivial -- as tautologies so often are.
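To see just how trivial that version is, consider a stylized illustration (the notation is mine, offered purely for exposition rather than drawn from our article or from Professor Dorf's column): any choice whatsoever can be redescribed after the fact as the maximization of a suitably rigged utility function.

```latex
% Purely illustrative: any observed choice can be redescribed as utility-maximizing.
% Let X be the set of feasible outcomes and x* the outcome actually chosen.
% Define a utility function that simply ranks x* above everything else:
\[
  u(x) =
  \begin{cases}
    1 & \text{if } x = x^{*},\\[2pt]
    0 & \text{otherwise.}
  \end{cases}
\]
% Then, trivially,
\[
  x^{*} \in \arg\max_{x \in X} u(x),
\]
% so the label "utility-maximizing" rules nothing out until one makes the
% normative commitment about what counts (and does not count) toward u.
```

The same move works for "efficiency": pick the baseline, and the desired conclusion follows.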
Interestingly, the response of people who insist on calling themselves utilitarians often takes the form of a two-step: (1) concede that there is no baseline and that everything is contingent, but then (2) act as if this somehow does not mean that their analysis is trivial. This, as we wrote in our 2021 piece, strongly resembles how "originalists" claim via "original public meaning" and other moves that they are not old-fashioned intentionalists, but then they rely on claims that, say, the people who wrote the 14th Amendment could not possibly have intended it to mean that same-sex marriage is a constitutional right.
Actually, there is then a third step, which is to eyeroll and sneer when they are called on their two-step. This is similar to economists (including Krugman, by the way) saying, "We've already gone over this!" when someone like me reminds them that there is no such thing as a Nobel Prize in Economics. "Our fraudulence is old news. Why do you keep bringing it up, over and over and over again, just because we keep doing the same old thing ourselves?" The economists with whom I have spoken who are aware of the tautological nature of the efficiency concept invariably say something like this: "Yes, but everyone has known that for years. We have to move on." Yet they do not move on.
But back to EA. At this point, it should be clear that the completely open-ended definition of "effective" makes this merely a specific case of the general incoherence of the efficiency concept. Thus, as yesterday's column explains, one common move is to switch to "rule-utilitarian" analysis, which simply posits that the law must be followed. Even there, however, the inevitable question is whether the maximizer is permitted (or even required) to change the law via the legalized bribery of political contributions, along with funding think tanks and scholars to justify changes in the law that would make the donor even richer.
As Professor Dorf notes, the billionaire-stroking (my term, not his) version of EA that has emerged is the "earning to give" (ETG) strategy, which essentially says that each person can provide the highest service to humans by making as much money as possible. After all, the more money one makes, the more mosquito nets one can donate.
Except that it never makes sense to donate the money! After all, the longer one can hold onto the money and invest it in other profit-seeking activities, the more money one will be able to give to prevent even more deaths in the future. That future never arrives, however, because at every moment, giving the money away in the here and now forsakes the further enlargement of what could (but never will) be donated.
How many generations into the future should we go? Should each person give it all away on their deathbed? Of course not, because they should leave their fortune intact and pass it (untaxed, natch) to their presumptively talented and deserving heirs, who will make it grow ever larger, leaving it in turn to their heirs in perpetuity.
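The arithmetic behind that endless deferral is worth making explicit. Here is a stylized sketch, assuming (purely for illustration) a constant positive annual return r and a fixed price p per mosquito net; the numbers are placeholders, not anyone's actual figures.

```latex
% Illustrative only: with any positive return r, waiting always "wins" on paper.
% Donating D dollars today buys D / p nets (at a price of p per net).
% Investing for one more year and then donating buys (1 + r) D / p nets.
\[
  \frac{(1+r)\,D}{p} \;>\; \frac{D}{p}
  \qquad \text{whenever } r > 0 .
\]
% Repeating the same step, waiting t + 1 years always beats waiting t years:
\[
  \frac{(1+r)^{t+1}\,D}{p} \;>\; \frac{(1+r)^{t}\,D}{p}
  \qquad \text{for every } t = 0, 1, 2, \dots
\]
% There is no finite t at which "give now" comes out ahead, so the maximizer's
% optimal donation date is always one more period away.
```

The stopping rule, in other words, has to come from somewhere outside the maximization itself, and nothing in ETG supplies one.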
Many readers of Dorf on Law are aware that I have been working on a series of articles and a never-ready book project investigating intergenerational justice. Even so, until last month's spectacular collapse of Sam Bankman-Fried's house of crypto-cards, I had barely taken note of the EA movement, even though (as I have now discovered) one of that movement's prominent strands includes what they call longtermism, based in large part on a philosophical tract called What We Owe the Future. Given that my project is called "What Do We Owe Future Generations?" one might imagine that I would have come across their work.
Once I familiarized myself with the ETG/longtermist story that Bankman-Fried had been pushing, however, I realized that I had in fact run across the basic arguments without having remembered the branding. And I did not take it seriously because all of this merely adds another dimension of tautological boundlessness to the already boundless notion of efficiency.
I cannot track down the article in which I read it, but one version of a longtermist argument is very Muskian indeed. (Musk and others, of course, have been big supporters of this self-justifying theory.) [Update: here it is, a long piece by Alexander Zaitchik in The New Republic; see also this other interesting New Republic piece. I have added the quotes below from Zaitchik's piece, updating my text as appropriate.] The most extreme version of the idea (if one can call it that) is that at some point human consciousness might be transferable into something other than the water-and-meat bags that we currently inhabit. As Zaitchik summarizes the absurdity of it all:
By the last chapter of What We Owe the Future, the reader has learned that the future is an endless expanse of expected value and must be protected at all costs; that nuclear war and climate change are bad (but probably not “existential-risk” bad); and that economic growth must be fueled until it gains enough speed to take us beyond distant stars, where it will ultimately merge, along with fleshy humanity itself, into the Singularity that is our cosmic destiny.
At that point, in a post-Matrix-like future, I suppose that humans will be able to live forever in a state that will seem real to them and that can bring them as much happiness as they can possibly achieve. Moreover, because those conscious beings will cost almost nothing to support, it should presumably be possible to "birth" not just billions of humanoid lives but quadrillions or quintillions. And even without taking it to that extreme, Zaitchik notes correctly that the beings who will exist in the far-distant future "will possess only a remote ancestral relationship to homo sapiens." What is the moral calculus for weighing their potential interests against ours?
In any case, with those stakes, a believer in longtermism would have to say: Who cares if a few hundred million people today have to continue to suffer, when an unimaginably larger number of future humans can be brought into being as a result? And the only way to do that, conveniently, is to allow tech bros and billionaires to continue to make as much money as possible, to be used to develop the means to create this brave new world.
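The expected-value arithmetic doing the work in that argument is blunt, and one made-up example shows its shape (the specific numbers below are mine, chosen only for illustration, not taken from any longtermist text):

```latex
% Illustrative numbers only. Suppose a longtermist posits:
%   - a payoff of 10^18 happy digital lives in the far future, and
%   - a probability of 10^(-9) that funding today's billionaires brings it about.
\[
  \underbrace{10^{-9}}_{\text{tiny probability}}
  \times
  \underbrace{10^{18}\ \text{future lives}}_{\text{posited payoff}}
  \;=\;
  10^{9}\ \text{lives in expectation},
\]
% which dwarfs the (say) 10^8 people whose present suffering is written off as
% the cost of getting there. Because the posited payoff can be made arbitrarily
% large, any present-day sacrifice can be made to look cheap in expectation.
```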
The philosopher Derek Parfit's foundational book Reasons and Persons is in part devoted to exploring the "non-identity problem." This concept captures the difference between a person who exists (that is, whom we can identify) and a person who could exist (whom we cannot). Only a vanishingly small fraction of the people who could exist ever will exist, because everything that we do changes who might exist. Something as simple as, for example, an hour's delay in a sexual tryst that results in pregnancy will change the specific sperm cell that penetrates the egg, resulting in a different person being born. We will never meet the potential person who would have existed were it not for the football game on TV running long.
One of the most important ethical implications of the non-identity problem is that currently-alive people arguably have claims on our moral obligations that potential people cannot. A useful thought experiment asks whether we should choose to use up all of Earth's resources in the process of giving today's living people long and healthy lives full of happiness and dignity, knowing that doing so would prevent all future people from ever existing. (The hypo involves a social agreement not to create any new people along the way.)
That is a profound ethical dilemma, far too complicated to even pretend to resolve here. I bring it up, however, because the EA/longtermist argument presumes that the future people who might exist (if only we could make it technically possible to bring them into existence and sustain them, possibly on Mars, if Musk's silly plans were to become reality, although even that is well short of the post-body world that could sustain quintillions of beings) are collectively so important that we should allow the visionaries to make their billions by any means necessary. That includes immiserating people living today and making the planet uninhabitable -- so long as the money could be used to make life sustainable otherwise.
As Galbraith said, the extremely wealthy will always want to reward people who provide justifications for them to do what they wanted to do all along, with a big bonus for anyone who says that doing so makes the rich people not merely justified but positively saintly. Think of the gazillions of future lives that they will bring into existence and then make better!
But again, while they could do that, the time will never arrive when the money should actually be spent to do so.
I have no doubt that there are many intelligent, well-meaning people who see the positive aspects of EA and who supported Bankman-Fried and others for admirable reasons. And as I noted above, there is no reason to say that any particular academic who has contributed to the field did so for nefarious reasons. After all, rich people are very good at finding people who are already saying things that can be manipulated to their ends. And as I noted at the beginning of this discussion, there is plenty of attractive content within EA to make it appealing to people who mean well.
In the end, however, the recent concern that EA will somehow be tainted by Bankman-Fried's corruption misses the point. EA's core good ideas are simply good ideas that can guide our decisions without the baggage of EA, ETG, or longtermism. Rejecting orthodox economics does not require us to say that waste is good. Similarly, if EA and its offspring were to be completely discredited, that would not force us to reject the idea that we ought to try to do the most good at the least cost -- so long as we show ample humility about the fact that "most good" and "least cost" are contestable, nonobjective inquiries.