How Sam Bankman-Fried put effective altruism on the defensive
NEW YORK — Disgraced crypto entrepreneur Sam Bankman-Fried wanted people to know that the enormous risks he took were in the service of humanity; that, at least, was the impression he tried to give on Nov 30 in an interview at The New York Times’ DealBook Summit.

The famously jittery Bankman-Fried, who was arrested Monday (Dec 12) night on fraud charges, looked relatively subdued as he Zoomed in for the interview from a dimly lit room in the Bahamas, a tax haven whose regulatory environment was particularly suited to his crypto ambitions.
“Look, there are a lot of things that I think have really a massive impact on the world,” Bankman-Fried said. “And ultimately that’s what I care about the most. And, I mean, I think frankly that the blockchain industry could have a substantial positive impact. I was thinking a lot about, you know, bed nets and malaria; about, you know, saving people from diseases no one should die from.”
This is the language of effective altruism — or EA — a philanthropic movement premised on the use of reason and data to do good. Bankman-Fried had long flaunted his EA bona fides to distinguish himself from other crypto billionaires. Earn a lot to give a lot. Direct your bounty to where it will matter the most. Now his crypto exchange, FTX, has collapsed, wiping out small investors and wreaking havoc in the industry. To hear Bankman-Fried tell it, the idea was to make billions through his crypto-trading firm, Alameda Research, and FTX, the exchange he created for it — funnelling the proceeds into the humble cause of “bed nets and malaria,” thereby saving poor people’s lives.
But last summer, Bankman-Fried was telling The New Yorker’s Gideon Lewis-Kraus something quite different. “He told me that he never had a bed-nets phase, and considered neartermist causes — global health and poverty — to be more emotionally driven,” Mr Lewis-Kraus wrote in August. Effective altruists talk about both “neartermism” and “longtermism.” Bankman-Fried said he wanted his money to address longtermist threats, including the dangers posed by artificial intelligence spiralling out of control. As he put it, funding for the eradication of tropical diseases should come from other people who actually cared about tropical diseases: “Like, not me or something.” (It looks increasingly unlikely that the nonprofits he started will be able to honour their financial commitments to his favoured causes.)
To the uninitiated, the fact that Bankman-Fried saw a special urgency in preventing killer robots from taking over the world might sound too outlandish to seem particularly effective or altruistic. But it turns out that some of the most influential EA literature happens to be preoccupied with killer robots, too.
The movement itself is still a big tent; there are effective altruists who remain dedicated to targeted interventions with proven results, including vaccination campaigns (and, of course, antimalarial bed nets). Mr Holden Karnofsky, a former hedge funder and a founder of GiveWell, an organisation that assesses the cost-effectiveness of charities, has spoken about the need for “worldview diversification” — recognising that there might be multiple ways of doing measurable good in a world filled with suffering and uncertainty.
The books, however, are another matter. Considerations of immediate need pale next to speculations about existential risk — not just earthly concerns about climate change and pandemics but also (and perhaps most appealingly for some tech entrepreneurs) more extravagant theorising about space colonisation and AI. Sometimes the books put me in mind of a bunch of smart, well-intentioned people trying to impress and outdo one another by anticipating the next weird thing. Instead of “worldview diversification,” there’s a remarkable intellectual homogeneity; the dominant voices belong to white male philosophers at Oxford.
Mr Nick Bostrom’s “Superintelligence” (2014) warns about the dangers posed by machines that might learn how to think better than we do; Mr Toby Ord’s “The Precipice” (2020) enumerates the cataclysms that could annihilate us. Professor William MacAskill has translated such doomsday portents into the friendlier language of can-do and how-to. In his recent bestseller, “What We Owe the Future” (2022), Professor MacAskill says that the case for giving priority to the longtermist view can be distilled into three simple sentences: “Future people count. There could be a lot of them. We can make their lives go better.”
At first glance, all of this looks straightforward enough. Professor MacAskill repeatedly calls longtermism “common sense” and “intuitive.” But each of those terse sentences glosses over a host of additional questions, and it takes Professor MacAskill an entire book to address them. Take the notion that “future people count.” Leaving aside the possibility that the very contemplation of a hypothetical person may not, for some real people, be “intuitive” at all, another question remains: Do future people count for more or less than existing people do right now?
This question marks the inflection point between neartermism and longtermism. Professor MacAskill cites philosopher Derek Parfit, whose ideas about population ethics in his 1984 book “Reasons and Persons” have been influential in EA. Parfit argued that an extinction-level event that destroyed 100 per cent of the population should worry us much more than a near-extinction event that spared a minuscule population (which would presumably go on to procreate), because the number of potential lives dwarfs the number of existing ones. There are 8 billion people in the world now; in “The Precipice,” Mr Ord names Parfit as his mentor and encourages us to think about the “trillions of human lives” to come.
If you’re a utilitarian committed to “the greatest good for the greatest number,” the arithmetic looks irrefutable. The Times’ Ezra Klein has written about his support for effective altruism while also thoughtfully critiquing longtermism’s more fanatical expressions of “mathematical blackmail.” But to judge by much of the literature, it’s precisely the more categorical assertions of rationality that have endowed the movement with its intellectual cachet.
In 2015, Professor MacAskill published “Doing Good Better,” which is also about the virtues of effective altruism. His concerns in that book (blindness, deworming) seem downright quaint when compared with the astral-plane conjectures (AI, building an “interstellar civilization”) that he would go on to pursue in “What We Owe the Future.” Yet the upbeat prose style has stayed consistent. In both books, he emphasises the desirability of seeking out “neglectedness” — problems that haven’t attracted enough attention so that you, as an effective altruist, can be more “impactful.” So climate change, Professor MacAskill says, isn’t really where it’s at anymore; readers would do better to focus on “the issues around AI development,” which are “radically more neglected.”
The thinking is presented as precise and neat. Like Mr Bostrom and Mr Ord (and Parfit, for that matter), Professor MacAskill is an Oxford philosopher. He is also one of the founders of effective altruism — as well as the person who, in 2012, recruited an MIT undergraduate named Sam Bankman-Fried to the effective altruism cause.
At the time, the logic of Professor MacAskill’s recruiting strategy must have seemed impeccable. Among his EA innovations has been the career research organisation known as 80,000 Hours, which promotes “earning to give” — the idea that altruistic people should pursue careers that will earn them oodles of money, which they can then donate to EA causes.
“The conventional advice is that if you want to make a difference you should work in the nonprofit or public sector or work in corporate social responsibility,” Professor MacAskill writes in “Doing Good Better.” But conventional is boring, and if the maths tells you that your energies would be more effectively spent courting promising tech savants with sky-high earning potential, conventional probably won’t get you a lot of new recruits.
“Earning to give” has its roots in the work of radical utilitarian philosopher Peter Singer, whose 1972 essay “Famine, Affluence and Morality” has been a foundational EA text. It contains his parable of the drowning child: If you’re walking past a shallow pond and see a child drowning, you should wade in and save the child, even if it means muddying your clothes. Extrapolating from that principle suggests that if you can save a life by donating an amount of money that won’t pose any significant problems for you, a decision not to donate that money would be not only uncharitable or ungenerous but morally wrong.
Professor Singer has also written his own book about effective altruism, “The Most Good You Can Do” (2015), in which he argues that going into finance would be an excellent career choice for the aspiring effective altruist. He acknowledges the risks of harm, but deems them worth taking. Chances are, if you don’t become a charity worker, someone else will ably do the job; whereas if you don’t become a financier who gives his money away, who’s to say that the person who does won’t hoard all his riches for himself?
Still, some people need to become philosopher-influencers in order to spread the word. “Will isn’t in finance,” Professor Singer writes, referring specifically to Professor MacAskill. “That’s because he believes that if he can influence two other people with earning capacities similar to his own to earn to give, he will have done more good than if he had gone into finance himself.”
Or maybe not.
On Nov 11, when FTX filed for bankruptcy amid allegations of financial impropriety, Professor MacAskill wrote a long Twitter thread expressing his shock and anguish as he wrestled in real time with what Bankman-Fried had wrought.
“If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community,” Professor MacAskill wrote in a tweet, followed by screenshots from “What We Owe the Future” and Mr Ord’s “The Precipice” that emphasised the importance of honesty and integrity.
I’m guessing that Bankman-Fried may not have read the pertinent parts of those books — if, that is, he read any parts of those books at all. “I would never read a book,” Bankman-Fried said this year. “I’m very sceptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
Avoiding books is an efficient method for absorbing the crudest version of effective altruism while gliding past the caveats. In the paperback edition of “Superintelligence,” which laid out a framework for thinking about a robot apocalypse, Mr Bostrom delivers the equivalent of a warning label to “those whose lives have become so busy that they have ceased to actually read the books they buy, except perhaps for a glance at the table of contents and the stuff at the front and toward the back.”
But the books themselves may have incentivised blind spots of their own. For all of Professor MacAskill’s galaxy-brain disquisitions on “AI takeover” and the “moral case for space settlement,” perhaps the EA fixation on “neglectedness” and existential risks made him less attentive to more familiar risks — human, banal and closer to home.
This article originally appeared in The New York Times.