Palantir's Palestine: How AI Gods Are Building Our Extinction
The machines are not coming for us. They are already here. And the men who control them have made their intentions terrifyingly clear.
There is a moment in every civilization’s collapse when the instruments of its destruction become visible to those paying attention. We are living in that moment now. But the warning signs are not carved in stone or written in prophecy—they are embedded in source code, amplified by algorithms, and funded by men who speak openly of human extinction while racing to cause it.
In a nondescript office in Palo Alto, a man who claims to fear fascism has become its most sophisticated architect. In a sprawling Texas compound, another man who styles himself a free speech absolutist uses his platform to amplify the voices calling for ethnic cleansing. And in the bombed-out hospitals of Gaza, their technologies converge in a laboratory of horrors that prefigures what awaits us all.
The four horsemen of this apocalypse do not ride horses. They deploy algorithms.
The Confession
Professor Stuart Russell has spent fifty years studying artificial intelligence. He wrote the textbook from which nearly every AI CEO in Silicon Valley learned their craft. And now he works eighty hours a week not to advance the field he helped create, but to prevent it from annihilating the species.
“They are playing Russian roulette with every human being on Earth,” Russell said in a recent interview, his voice carrying the weight of someone who has seen the calculations and understood their implications. “Without our permission. They’re coming into our houses, putting a gun to the head of our children, pulling the trigger, and saying, ‘Well, you know, possibly everyone will die. Oops. But possibly we’ll get incredibly rich.’”
This is not hyperbole from an outsider. This is the assessment of a man whose students now run the companies building these systems. And here is what should terrify you: the CEOs themselves agree with him.
Dario Amodei, CEO of Anthropic, estimates a 25% chance of human extinction from AI. Elon Musk puts it at 20-30%. Sam Altman, before becoming CEO of OpenAI, declared that creating superhuman intelligence is “the biggest risk to human existence that there is.”
Twenty-five percent. Thirty percent. These are not the odds of a coin flip. These are the odds of Russian roulette with two rounds loaded in the cylinder. And yet they continue to spin it.
When Russell was asked if he would press a button to stop all AI progress forever, he hesitated—not because he believes the technology is safe, but because he still harbors hope that humanity might pull out of what he calls “this nosedive.” Asked again a year from now, he admits, he might give a different answer.
“Ask me again in a year,” he said. “I might say, ‘Okay, we do need to press the button.’”
But there may not be a button. There may not be a year. The event horizon, as Altman himself has written, may already be behind us.
The Gorilla Problem
Russell offers what he calls “the gorilla problem” as a framework for understanding our predicament. A few million years ago, the human line branched off from the gorilla line in evolution. Today, gorillas have no say in whether they continue to exist. We are simply too intelligent, too capable, too dominant for their survival to be anything other than a matter of our sufferance. We decide whether gorillas survive or go extinct. For now, we let them live.
“Intelligence is actually the single most important factor to control planet Earth,” Russell explains. “And we’re in the process of making something more intelligent than us.”
The logic is inescapable. If we create entities more capable than ourselves, we become the gorillas. And gorillas cannot negotiate the terms of their extinction.
But here is where Russell’s framework falls short and requires expansion. The gorillas face one superior species. We face something far more insidious: a superior intelligence controlled by a handful of men whose values, as demonstrated by their actions, are antithetical to human flourishing.
The gorillas, at least, are threatened by humanity in the aggregate. We are threatened by humanity’s worst specimens, amplified by technologies that multiply their power beyond anything history has witnessed.
“These bombs are cheaper and you don’t want to waste expensive bombs on unimportant people”
The Men Behind the Curtain
Alexander Karp was born to activists. His mother, an African American artist, created works depicting the suffering of Black children murdered in Atlanta. His father, a German Jewish immigrant, worked as a pediatrician. They took young Alex to civil rights marches, exposed him to injustice, taught him to fight against oppression.
And then he grew up to build Palantir.
Named after the Seeing Stones of Tolkien’s legendarium—artifacts that were “meant to be used for good purposes” but proved “potentially very dangerous”—Palantir was founded in the aftermath of September 11th, 2001, with seed money from In-Q-Tel, the CIA’s venture capital arm. Karp, who claims he “cannot do something I do not believe in,” has spent two decades doing precisely that.
The company’s software now powers what Israeli soldiers describe with chilling bureaucratic efficiency: “I would invest 20 seconds for each target and do dozens of them a day. I had zero added value as a human. Apart from being a stamp of approval.”
Twenty seconds. That is the value of a Palestinian life in the algorithmic calculus of Alex Karp’s creation. The machine decides who dies. The human merely clicks.
When whistleblowers revealed that Israeli intelligence officers were using “dumb bombs”—unguided munitions with no precision capability—on targets identified by Palantir’s AI, their justification was purely economic: “These bombs are cheaper and you don’t want to waste expensive bombs on unimportant people.”
Unimportant people. Children. Doctors. Journalists. Poets.
Karp has admitted, in a moment of rare candor: “I have asked myself if I were younger, at college, would I be protesting me?”
He knows the answer. We all know the answer. He simply does not care.
The Digital Brownshirts
Elon Musk presents himself as a different kind of tech titan—the quirky engineer, the Mars visionary, the champion of free speech who bought Twitter to liberate it from the “woke mind virus.” But Sky News recently conducted an experiment that strips away this carefully constructed persona.
Researchers created nine fresh accounts on X—Musk’s renamed platform—and left them running for a month. Three accounts followed left-leaning content. Three followed right-leaning content. Three followed only neutral accounts like sports and music.
Every single account, regardless of its stated preferences, was flooded with right-wing content. Users who followed only sports teams saw twice as much right-wing political content as left-wing. Even the left-leaning accounts were fed 40% right-wing material.
This is not organic engagement. This is algorithmic manipulation on a civilizational scale.
“If you open the app on your phone and you immediately see a news agenda maybe filled with more hate towards certain groups, it’s going to have an impact,” observed Bruce Daisley, the former head of Twitter for Europe, the Middle East, and Africa. “And that’s not to say that free speech can’t exist, but if eight million people every day are opening their phones to see a news agenda that maybe is right to the fringes of what we’re used to, at the very least, we should have some visibility of the impact that’s going to have on politics.”
Musk reinstated the account of Tommy Robinson, the far-right pro-Zionist agitator who organized 150,000 people to march through London calling for mass deportations. Robinson thanked Musk publicly. Musk reposted the thanks and declared it was time for “the English to ally with the hard men.”
Hard men. The historical euphemism for fascists.
When politicians Musk favors post content, their engagement skyrockets. When politicians he dislikes post identical amounts, their reach flatlines. This is not a town square. It is a propaganda machine with a proprietor who openly interferes in the politics of nations he does not inhabit, backing candidates he has never met, pushing ideologies that would have been considered fringe extremism a decade ago.
And here is the connection that matters: Musk is the CEO of xAI, one of OpenAI’s largest competitors. He has put the odds of human extinction from AI as high as 30%. And he is using the world’s most influential social media platform to promote the political movements most likely to strip away the regulations that might prevent that extinction.
The fascists have captured the algorithm.
The Laboratory of the Future
Dr. Ghada Karmi was a child in 1948 when she lost her homeland. She remembers enough to know that she lost her world. For seventy-seven years, she has watched as the mechanisms of Palestinian erasure evolved from rifles and bulldozers to algorithms and autonomous weapons systems.
“Zionism is evil,” she says with the quiet certainty of someone who has spent a lifetime studying its fruits. “It is purely evil. It has created disasters, misery, atrocities, wars, aggression, unhappiness, insecurity for millions of Palestinians and Arabs. This ideology has no place whatsoever in a just world. None. It has to go. It has to end. And it has to be removed. Even its memory has to go.”
But Zionism, in its current iteration, is not merely an ideology. It is a business model. It is a technology demonstration. It is the beta test for systems that will eventually be deployed everywhere.
The Israeli military’s Lavender system uses AI to identify targets for assassination. Soldiers describe processing “dozens of them a day” with “zero added value as a human.” The algorithm marks. The human clicks. The bomb falls.
This is not a war. It is a sick, twisted video game.
Palantir’s technology identifies the targets. Musk’s Starlink provides the communications. American military contractors supply the weapons. And the entire apparatus is funded by governments whose citizens have marched in the millions demanding it stop.
“The genocide has not provoked a change in the official attitude,” Dr. Karmi observes. “I’m astonished by this and it needs an explanation.”
The explanation is simpler and more terrifying than any conspiracy. The explanation is that the people who control these technologies have decided that some lives are worth twenty seconds of consideration and others are worth none at all. And the governments that might regulate them have been captured by men waving fifty billion dollar checks.
“They dangle fifty billion dollar checks in front of the governments,” Professor Russell explains. “On the other side, you’ve got very well-meaning, brilliant scientists like Geoff Hinton saying, actually, no, this is the end of the human race. But Geoff doesn’t have a fifty billion dollar check.”
The King Midas Problem
Russell invokes the legend of King Midas to explain the trap we have built for ourselves. Midas wished that everything he touched would turn to gold. And it did. And then he touched his water and it became metal. He touched his food and it became inedible. He touched his daughter and she became a statue.
“He dies in misery and starvation,” Russell recounts. “So this applies to our current situation in two ways. One is that greed is driving these companies to pursue technology with probabilities of extinction being worse than playing Russian roulette. And people are just fooling themselves if they think it’s naturally going to be controllable.”
The CEOs know this. They have signed statements acknowledging it. They estimate the odds of catastrophe at one in four, one in three, and they continue anyway.
Why?
Because the economic value of AGI—artificial general intelligence—has been estimated at fifteen quadrillion dollars. This sum acts, in Russell’s metaphor, as “a giant magnet in the future. We’re being pulled towards it. And the closer we get, the stronger the force, and the higher the probability that we will actually get there.”
Fifteen quadrillion dollars. For comparison, the Manhattan Project cost roughly thirty billion in today’s dollars. The budget for AGI development next year will be a trillion dollars: more than thirty times the investment that built the atomic bomb.
And unlike the Manhattan Project, which was conducted in secret by a nation at war, this development is being conducted by private companies answerable only to their shareholders, in peacetime, with no democratic oversight, no regulatory framework, and no meaningful safety requirements.
“The people developing the AI systems,” Russell observes, “they don’t even understand how the AI systems work. So their 25% chance of extinction is just a seat of the pants guess. They actually have no idea.”
No idea. But they’re spending a trillion dollars anyway. Because the magnet is too strong. Because the incentives are too powerful. Because they have convinced themselves that someone else will figure out the safety problem. Eventually. Probably. Maybe.
What Now?
If everything goes right—if somehow we solve the control problem, if somehow we prevent extinction, if somehow we navigate the transition to artificial general intelligence without destroying ourselves—what then?
Russell has put this question to AI researchers, economists, science fiction writers, futurists. “No one has been able to describe that world,” he admits. “I’m not saying it’s not possible. I’m just saying I’ve asked hundreds of people in multiple workshops. It does not, as far as I know, exist in science fiction.”
There is one series of novels, he notes, where humans and superintelligent AI coexist: Iain M. Banks’s Culture novels. “But the problem is, in that world there’s still nothing to do. To find purpose.”
The only humans with meaning are the 0.01% on the frontier, expanding the boundaries of galactic civilization. Everyone else is desperately trying to join that group “so they have some purpose in life.”
This is the best-case scenario. The utopia we’re racing toward is a cruise ship where the entertainment never ends and the meaning never arrives.
“Epstein is dead, or so we are told. But his network remains. His colleagues are still building. His vision of a world sorted into the served and the sacrificed is being encoded into algorithms at this very moment”
The Island
But we need not speculate about what happens when mankind runs out of meaning. We have already seen it. We have the receipts, the flight logs, the testimony of survivors. The men who have everything showed us what they do when nothing is forbidden.
Jeffrey Epstein’s island was not an aberration. It was a preview.
Here was a man connected to the CIA, to Mossad, to the highest levels of American political power. A man who, according to recently released emails, estimated that the federal government knew about roughly twenty of the children he had trafficked. A man whose black book read like a who’s who of global power: presidents, princes, tech billionaires, Nobel laureates.
The emails reveal something beyond mere criminality. They reveal an infrastructure. Epstein was, as media researcher Nolan Higdon documents, “someone who could find dirt on people and possibly destroy their image, and also was someone you could go to to protect people’s images as well.” He operated at the nexus of intelligence agencies, financial power, and technological development—advising on spyware, brokering deals between governments, connecting the men who would build the surveillance apparatus now pointed at all of us.
When ABC reporter Amy Robach had evidence of his sex crimes, the network killed the story. When accusers came forward, the New York Times dismissed their claims as baseless. When he was finally convicted, he received a sentence so lenient it became known as the “sweetheart deal.” And when he died in a federal prison under circumstances so suspicious that CBS News debunked every official explanation—the wrong floor on the released footage, a camera malfunction the manufacturer says is impossible—the investigation simply stopped.
The question is not whether Epstein was connected to these powerful figures. The emails have settled that. The question, as Higdon frames it, is how “one person could have his finger in so many pots with so many connections.” And the answer the media refuses to pursue is the obvious one: he was not operating alone. He was a node in a network—a network that included the intelligence agencies now partnering with AI companies, the billionaires now building our algorithmic future, the politicians now refusing to regulate any of it.
What did these men do when they had accumulated more wealth than could be spent in a thousand lifetimes? When they had shaped governments, launched technologies, bent the arc of history to their will?
They visited the island.
The film “Hostel” imagined wealthy elites paying to torture and kill ordinary people for sport. Critics dismissed it as horror movie excess. But the premise—that absolute power produces absolute depravity, that men who want for nothing will eventually want the forbidden—was not fiction. It was prophecy.
“What do you do when you have all the money in the world and all the power in the world?” asks Steve Grumbine, who has studied the Epstein files extensively. “Well, you do whatever you want to do. Absolute power corrupting absolutely.”
The children trafficked to that island were not incidental to the system. They were the system—the currency of compromise, the mechanism of control, the ultimate expression of what happens when a class of people comes to believe they are gods.
As I have written before: there is a reason why pedophiles turn out to be the most successful capitalists.
This is the future the AI accelerationists are building, whether they know it or not. A world where a handful of men control technologies of unprecedented power, answerable to no one, restrained by nothing, their every appetite indulged by machines that never refuse and never report. The Epstein island, scaled to planetary dimensions.
Epstein is dead, or so we are told. But his network remains. His colleagues are still building. His vision of a world sorted into the served and the sacrificed is being encoded into algorithms at this very moment.
When Peter Thiel, another acquaintance of Epstein and co-founder of Palantir, named his company after Tolkien’s seeing stones, he perhaps did not consider the full implications of the reference. In the novels, the palantíri were corrupted—used by Sauron to show partial truths that led to despair and domination. Those who gazed into them saw what the Dark Lord wanted them to see.
We are all gazing into the stones now. And the men who control what we see in these algorithmic palantíri have already shown us, on a Caribbean island and in the rubble of Gaza, exactly what they intend.
It looks like algorithms making life-and-death decisions with twenty seconds of human oversight. It looks like predictive policing in Florida, where residents are cited for overgrown grass because software flagged them as potential criminals. It looks like the hollowing out of every profession, every craft, every form of human contribution that might give us purpose. It looks like Palestinian children being raped without end inside the dark chambers of the IDF dungeons.
The Enablers
Dr. Karmi returns again and again to a simple question: Why?
“Why should a state that was invented, with an invented population, have become so important that we can’t live without it?” she asks of Israel. But the question applies equally to Silicon Valley, to the tech platforms, to the entire apparatus of algorithmic control that now shapes our politics, our perceptions, our possibilities.
The answer, she suggests, lies in understanding the enablers.
“I think it’s absolutely crucial now to focus on the enablers,” she argues. “Because we can go on and on giving examples of Israeli brutality, of the atrocities, of the cruelties. That’s not for me the point. The point is who is allowing this to happen?”
The same question must be asked of AI. Who is allowing this to happen? Who is funding the companies that acknowledge a 25% chance of human extinction and continue anyway? Who is providing the regulatory vacuum in which these technologies develop unchecked? Who is amplifying the voices calling for acceleration while silencing those calling for caution?
The answer is the same class of people who have enabled every catastrophe of the modern era: the comfortable, the compliant, the compromised. The politicians who take the fifty billion dollar checks. The journalists who amplify the preferred narratives. The citizens who scroll past the warnings because they are too busy, too distracted, too convinced that someone else will handle it.
“All the polls that have been done say most people, 80% maybe, don’t want there to be super intelligent machines,” Russell notes. “But they don’t know what to do.”
They don’t know what to do. So they do nothing. And the machines keep learning. And the algorithms keep shaping. And the billionaires keep abusing. And the bombs keep falling. And the future keeps narrowing.
The Resistance
What is to be done?
Russell’s advice is almost quaint in its simplicity: “Talk to your representative, your MP, your congressperson. Because I think the policymakers need to hear from people. The only voices they’re hearing right now are the tech companies and their fifty billion dollar checks.”
Dr. Karmi offers something similar: “My advice is to target the official structures which keep Israel going. They need to understand that being nice to Palestinians or sending food or whatever is fine, but it is not the point. The point for people living in western democracies is they can express a view.”
The counterargument is obvious: these structures are captured. The platforms that might amplify our voices are controlled by the very forces we need to resist. The politicians who might act are bought. The media that might inform are complicit.
But the counterargument misses the point. The point is not that resistance will succeed. The point is that resistance is the only thing that might succeed.
“I’m not sure what to do,” Russell admits, “because of this giant magnet pulling everyone forward and the vast sums of money being put into this. But I am sure that if you want to have a future, and a world that you want your kids to live in, you need to make your voice heard.”
What does that look like?
It looks like refusing to use platforms designed to indoctrinate us. It looks like demanding that our representatives explain their positions on AI safety. It looks like supporting the whistleblowers who reveal what these companies are doing. It looks like building alternative structures that do not depend on the benevolence of billionaires.
It looks like refusing to be gorillas.
The Choice
Alex Karp’s mother devoted her art to documenting the suffering of Black children murdered in Atlanta. His father spent his career caring for the sick. They taught him to march against injustice.
And he built a machine that decides, in twenty seconds, which children in Gaza will die today.
Elon Musk claims to champion free speech. He claims to fear the extinction of humanity. He claims to want to preserve western civilization.
And he uses his platform to amplify the voices calling for ethnic cleansing, to boost the politicians who would eliminate the regulations that might prevent catastrophe, to reshape the information environment of entire nations according to his preferences.
Stuart Russell has spent fifty years in artificial intelligence. He could retire. He could play golf. He could sail.
And instead he works eighty hours a week, trying to divert humanity from a course he believes leads to extinction.
These are the choices that matter. Not the abstract debates about technology, but the concrete decisions about what we do with our one life, our one moment of influence, our one chance to shape what comes next.
“There isn’t a bigger motivation than this,” Russell says simply. “It’s not only the right thing to do, it’s completely essential.”
The gorillas could not choose their fate. They were outcompeted by a species more intelligent than themselves, and now their survival depends entirely on whether that species decides to permit it.
We still have a choice. The machines are not yet smarter than us. The algorithms are not yet in complete control. The billionaires are not yet omnipotent.
But the window is closing. The event horizon may already be behind us. And the men who control the most powerful technologies in human history have made their values abundantly clear.
They will pursue profit over safety. They will amplify hatred over tolerance. They will choose rape over romance. They will enable genocide if the margins are favorable. They will risk extinction if the upside is sufficient.
This is not speculation. This is the record. This is what they are doing, right now, in plain sight.
The question is not whether we understand the danger. The question is what we will do about it.
In the rubble of Gaza, AI systems are learning. They are learning that human life can be processed in twenty seconds. They are learning that some people are worth expensive bombs and others are not. They are learning that the international community will watch and do nothing.
What they learn there, they will eventually apply everywhere.
This is not a warning about the future. It is a description of the present. The future is merely the present, continued, worse.
Unless we stop it.
Unless we choose differently.
Unless we refuse to become the gorillas.
- Karim.
* To increase the visibility of BettBeat Media, your restack of this article would be greatly appreciated.
Your support today helps us maintain our founding principle: quality analysis available to everyone, regardless of financial means. Honor the path the early supporters have blazed by becoming a paid subscriber—together, we can build a sustainable model that respects both our work and our community’s diverse economic realities.