Sam Altman’s Dangerous Singularity Delusions

The CEO of OpenAI, Sam Altman, is on a messianic mission to bring about the Singularity, the moment at which artificial intelligence begins to self-improve. If AI is smart enough to build the next generation of even smarter AI systems, this will trigger an “intelligence explosion” resulting in an artificial superintelligence that is more “intelligent” than all of humanity combined.

Some call this “god-like AI.” Elon Musk describes it as “basically a digital god.” Many people, including Altman, argue that ASI will either annihilate humanity or usher in a utopian world of radical abundance, unlimited energy, immortality and cosmic delights beyond our wildest imaginations. “I think the good case,” Altman says, “is just so unbelievably good that you sound like a really crazy person to start talking about it.” “The bad case,” he adds, “is, like, lights out for all of us.”

What everyone misses about Altman’s “good case” scenario is that it would also result in the extinction of our species. His version of “utopia” would entail the complete disappearance of humanity. In a 2017 blog post titled “The Merge,” he writes:

We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.

In other words, we can die out once ASI arrives, or we can “survive” by “merging” with AI. The merge, Altman suggests, is “probably our best-case scenario” for making it in the post-Singularity world.

Altman says that “merging” with AI “can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot.” Becoming best buddies with AI doesn’t sound like a true merge, though. I know of people who’ve developed intimate relationships with AI, but I wouldn’t consider them to have merged with the machines.

What Altman is really getting at is far more radical. Elsewhere in the essay, he writes that

if two different species both want the same thing and only one can have it — in this case, to be the dominant species on the planet and beyond — they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

The two “species” here are humans and ASI. Both want to dominate, Altman says, but only one can. Since there’s no way for ASI to become a biological human, the only other option is for humans to become digital beings like the ASI. That’s the sole way for us to form “one team” — humanity becoming the new species to which ASI belongs.

Altman says as much in a 2016 interview with The New Yorker. “We need to level up humans,” he declares, “because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever.” He elaborates: “The merge has begun — and a merge is our best scenario. Any version without a merge will have conflict: we enslave the AI or it enslaves us. The full-on-crazy version of the merge is we get our brains uploaded into the cloud,” to which he adds, “I’d love that.” Two years later, he signed up with a startup called Nectome to have his brain digitized when he dies, something he believes will become feasible in the near future. Altman is preparing to become an AI himself.

In a separate New Yorker article published this year, Altman told Ronan Farrow and Andrew Marantz that his “definition of winning is that people crazy uplevel — and the insane sci-fi future comes true for all of us.” In other words, he wants a world in which we all become disembodied digital minds existing on computer hardware, and he sees this as the “best-case scenario” — the one in which we, in some sense, “survive” the history-rupturing Singularity event that he’s trying to bring about.

But would merging with AI actually guarantee “our” survival? No. If humanity were digitized, we would become an entirely different species — the same kind as ASI. Altman is thus saying that the only way for humanity to avoid extinction is to go extinct. Either the ASI “will kill us all,” or we will “uplevel” by abandoning our biological substrate and become something fundamentally different from Homo sapiens. Both cases will result in human extinction.

Yet most people around the world wouldn’t opt to become a disembodied digital brain. I certainly wouldn’t. Imagine being tortured forever in a digital dungeon by autocratic rulers who themselves would be immortal. This would become a very real risk if one were digitized, as software doesn’t age like biological organisms do.

And what happens to all the humans who choose not to “merge” with AI? They will, according to Altman, be “enslaved” by ASI and the digital people who’ve merged with it. Humanity will then “fade into an evolutionary tree branch,” i.e., die out.

Worse, there is no reason to expect that most people will be given the opportunity to become a digital brain in the first place. Why on earth would the greedy tech billionaires who control ASI allow the mass of poor people the world over to join them in their digital “utopia”? Of course they’ll restrict who has access. Utopia is an inherently exclusionary concept, as someone or something is always left out — otherwise it wouldn’t be utopia. Guess who’s left out of the “utopia” that Altman envisions? The 99% and, ultimately, humanity itself.

This is what I call a “pro-extinctionist” view. Pro-extinctionism is the claim that humanity should go extinct by being replaced with some form of “posthuman.” ASI would be posthuman, as would digital people of the sort that Altman hopes to become.

Let’s not mince words: Altman is a pro-extinctionist, though he’s generally careful not to advertise this publicly. He’s actively trying to build a superintelligent AI and trigger the Singularity, after which, he claims, our only hope of “survival” will be to abandon our biology and radically transform ourselves into disembodied software running on computers. Those who choose not to digitize themselves — or who are denied this opportunity — will be enslaved before the human species is snuffed out forever.

Notice how this adds an extraordinary layer of insult to the ongoing injuries caused by AI. Altman is aware that AI is wreaking havoc on society. His company, OpenAI, is currently facing eight wrongful death lawsuits because people committed suicide or murdered others after ChatGPT encouraged them to do so. Far more have experienced episodes of psychosis due to AI. The internet is flooded with AI-generated disinformation and deepfakes. Jobs are disappearing. Data centers are polluting surrounding communities. AI is enabling mass surveillance and selecting military targets in Iran. And it poses a dire threat to civic institutions like the rule of law, the free press and universities, as a recent study shows.

And yet we’re told these harms are justified by the unfathomable benefits of an impending “utopia.” That utopia, however, will entail the enslavement and extinction of our species — according to Altman himself. This is outrageous. It’s analogous to a surgeon sitting down with her patient and saying: “I know this procedure is going to hurt a lot. But it’s the only way to ensure that you die young.”

Few people in the media seem to have noticed Altman’s pro-extinctionist agenda. The public is largely unaware of the looming existential threat posed by the Singularity. If they were, surely there would be widespread social unrest. To save our species, we must act now to stop the ASI race through boycotts, protests and campaigns aimed at pressuring our elected leaders to do something. We haven’t crossed the Rubicon yet, but the hour is late.
