“I don’t know how to say this in a polite way, but Beff is evil,” says German-American AI entrepreneur Connor Leahy.
The long-haired 28-year-old is talking both about and to his opponent in a YouTube debate, who goes by the pseudonym Beff Jezos, or just “Beff”.
He’s a co-founder of a controversial movement known as “e/acc”, short for “effective accelerationism”.
E/acc is wrestling for control of the AI steering wheel, and its stated goal is to hit the gas as hard as possible. In fact, the movement’s motto is “accelerate or die”.
The e/acc movement is full of AI industry insiders — including top engineers, investors, and executives.
They’re waging a war against the AI safety movement, and anyone arguing to slow down — the “doomers” and “decels” (short for decelerationists). Beff’s debate opponent Connor Leahy is just one of them.
And while this conflict is largely unnoticed outside of Silicon Valley, its outcome could have consequences for billions of people, if AI is half as potent a technology as they believe it to be.
If you ask e/acc, to slow down AI progress in the name of safety is to risk or even preclude the survival of the human species.
If you ask the AI safety camp, we’ll be lucky if the highly sophisticated AI they’re trying to build doesn’t destroy civilisation.
Both sides boast some of the brightest computer scientists of our time, and a high degree of influence in Silicon Valley.
So with all those hands on the wheel, who’s driving the car?
Who is e/acc?
E/acc can be hard to pin down, but beneath the jargon, the factions and the memes, there’s one conviction that unites them: when it comes to AI, faster is better.
Beyond that, they tend to have an unshakeable faith in free markets and, somewhat counterintuitively, a deep distrust of big tech companies.
Instead, they argue for decentralised control of important technology — especially AI.
E/acc’s biggest goal, like most of the tech industry in 2024, is Artificial General Intelligence, or AGI: AI that’s smarter than humans across the board.
Predicting it will arrive before 2030 is no longer considered controversial.
The most serious accusation levelled against e/acc is that they don’t care if AI replaces the human species.
Not all e/accs agree on this front — some think the more sophisticated AI of the future won’t be willing or even able to harm us.
Others think it might, but the reward is worth the risk.
And a third camp sees AI as the next logical stage of evolution.
They think humans have a responsibility, or maybe a cosmic destiny, to hold the door open for a higher intelligence, even if that means being replaced.
A statement of e/acc’s principles on its website reads: “If every species in our evolutionary tree was scared of evolutionary forks from itself, our higher form of intelligence and civilisation as we know it would never have emerged.”
“Stop fighting the thermodynamic will of the Universe.”
E/acc sees itself as the antidote to an AI safety movement that’s run amok.
It believes a zealous but influential subset of “safetyists” has infiltrated AI boards and government advisory panels, instilling paranoia and hesitancy about the very technology most likely to ensure humanity’s survival.
E/acc’s full name — Effective Accelerationism — is itself a riff on Effective Altruism, or EA, a movement closely tied to AI safety that promotes a far more cautious approach to AI.
The most public moment so far in the war between AI safety and e/acc came late last year, during the board meltdown at OpenAI, best known as the maker of ChatGPT.
OpenAI’s CEO, Sam Altman, was fired by the board over concerns he wasn’t taking safety seriously enough.
Within days, almost all of OpenAI’s staff had threatened to walk out if the former CEO wasn’t reinstated.
After his return, the most safety-focussed board members, who’d led the push to fire Altman, were purged.
They don’t agree on much, but both sides of the war say that e/acc won that round.
The moment e/acc accelerated
The victory at OpenAI, played out on a global stage, brought new energy to the e/acc cause.
High-profile figures in Silicon Valley started adding “e/acc” to their Twitter bios.
Renowned venture capitalist Marc Andreessen wrote a “techno-optimist manifesto”, and Garry Tan, the CEO of the influential startup accelerator Y Combinator, gave rousing speeches at e/acc events.
Grimes DJ-ed an e/acc party. There was merch. There was a flag bearing the libertarian symbol of the Gadsden snake.
At the centre of it all was a man known only as Beff Jezos — one of e/acc’s four founders. At the time, he was still anonymous, like most of the movement.
Beff’s avatar was a cartoon of an absurdly ripped Jeff Bezos, the founder of Amazon, towering before an intergalactic backdrop.
On X, formerly known as Twitter, Beff Jezos is an ostentatious meme-lord, issuing edicts about e/acc philosophy, fighting with “doomers”, and worshipping the “thermodynamic god”.
The thermodynamic god is a kind of in-joke for e/acc; a reference to the laws of physics, and an embodiment of the transformational AI they want to see.
He posts constantly about the urgent need for more powerful AI, along with wild statements such as “I will die trying to make that happen”.
As e/acc’s profile grew, Beff Jezos seemed dangerous enough to Forbes magazine that in December 2023, one of its reporters did something drastic: citing the public interest, they doxxed him.
That’s when we met Guillaume Verdon.
The real Beff Jezos
In the days after he was doxxed, Guillaume Verdon found himself sitting opposite US podcast host Lex Fridman, embarking on a marathon interview about his origins and beliefs.
“Ever since I was a kid, I wanted to figure out a theory of everything, to understand the universe.”
It turned out the real Beff Jezos was a brilliant scientist working in quantum computing and AI.
Only in his early 30s, he had already held leadership roles at two cutting-edge companies owned by Google’s parent company, Alphabet.
Guillaume Verdon sounded nothing like his bombastic and polarising alter-ego Beff Jezos.
When pressed by Fridman about e/acc’s goals, beyond the obvious desire to accelerate technological progress, Verdon seemed to say that e/acc was about compromise.
“It’s trying to find a middle ground between e/acc and … AI safety,” he said.
“Having a movement that is opposite to … the mainstream narrative that was taking over Silicon Valley was important to shift the dynamic range of opinions.
“So it’s kind of we’re trying to bring balance to the force.”
Verdon went on to imply that much of e/acc’s provocation amounts to shock tactics for clicks.
“It’s kind of a vector to spread the message.
“You get rewarded for polarisation … and so even though we started a movement that is literally trying to polarise the tech ecosystem, at the end of the day, it’s so that we can have a conversation and find an optimum together.”
If you take Guillaume Verdon at his word, e/acc’s not even out to win the war, necessarily.
He seems to be saying they just want a truce they can tolerate.
For all the talk of moderation, Verdon is also a subscriber to arguably the darkest part of e/acc’s vision — the part that sees humans as an opening act for a more sophisticated consciousness.
“I’m not trying to actually replace humans,” he insists during the debate with Connor Leahy.
But he doesn’t sound all that committed to preventing that either.
“I think there’s a lot of beauty in the universe … and it’s a beautiful machine that has created us.
“Part of e/acc is to appreciate this principle in a way that’s not just centred on humanity, but kind of broader.
“Because we cherish this beautiful state of matter we’re in, we kind of feel a responsibility to scale it in order to preserve it, because the options are to grow or die.”
‘Not all e/accs’
Haodong Mo, a software engineer living near Gladstone in Central Queensland, is not that kind of e/acc.
He’s preoccupied with what he sees as AI’s almost infinite potential to help humanity.
“You sort of get it into your head that, well, if there are these benefits, then we should pursue them,” he says.
“If we play our cards right, maybe the future, even the near future, will be unimaginably good.”
Delays to that future, for Haodong, constitute a special form of cruelty.
“Every single day, thousands of people around the world die of various reasons; preventable diseases, non-preventable diseases, war, conflict, much of it brought about by resource shortages.
“Every single day that you delay the creation of transformational artificial intelligence, which we believe will be able to solve all of these problems, you are allowing those people to die.”
In many ways, he’s a perfect fit for the movement.
But despite sharing many e/acc values, he’s not willing to call himself one.
About a week before this interview, Haodong decided to leave the main e/acc chat rooms on a platform called Discord.
“First things first. It’s a cesspit,” he says.
“They have a high tolerance towards very, very far-right people and trolls.”
The final straw came, he says, when someone was advancing an anti-Semitic conspiracy theory that an evil Jewish cabal was trying to wipe out western civilisation.
It’s true that sexism, racism and general bigotry are regular features in the forum.
“I don’t want to be associated with a lot of these guys. They’re very extreme libertarian kooks.”
Despite all the cultural problems and sometimes controversial beliefs about the role of AI, e/acc sees itself as optimistic.
Perhaps the least threatening version of the e/acc worldview is captured in “WAGMI” — short for “we’re all going to make it”.
To that, the doomers of the world most often reply: “Sure, but what if you’re wrong?”
Who’s really winning the war?
Despite the apparent war, e/accs and doomers have a surprising amount in common.
Both movements are full of people working in tech, who have always believed in it as a force for good.
The question that separates the e/accs from the doomers is whether AI might be a special case — the first exception to the rule within our lifetimes.
Self-professed doomer Matt Fallshaw sees other common ground too.
“There’s a corner of e/acc … who really are imagining a future that’s quite similar to the future that I’m imagining.”
He’s referring to the belief that a super-intelligent AI might lead to the demise of humanity.
“And I find it horrifying, and they’re OK with it.”
As the co-founder of a global accessories brand called Bellroy, Matt’s a successful Australian entrepreneur.
Behind the scenes, he’s also been active for decades in AI safety circles.
“If we stay on the path that we are on now, I think we would have to be amazingly lucky not to create for ourselves a future where, literally, children don’t get to grow up to be adults.”
Matt is hopeful we’ve got a decade before reaching AGI, although he’s mentally prepared for much less.
“My near-term horizon is I think we’re going to wake up tomorrow [with it] but I’m not sure.
“We need to stop before a year. And that’s really, really quite frightening.”
Scanning recent e/acc posts on social media, there’s a broad triumphalist streak — a clear sense that they’re winning the war.
Matt Fallshaw agrees e/acc is in front, and not by a little.
“E/acc doesn’t have to do anything, they don’t have to achieve a goal,” he says.
“The default course is the thing that e/acc is arguing for; it is unrestrained progress to make more and more capable AIs.
“We are storming ahead at an alarming pace and I would love some more time.”