Essay 01  ·  February 2026

Who is Going to Burn on the AI Stake?

Copernicus waited 13 years to publish. Galileo recanted. Bruno burned. The pattern of identity-level disruption has always been the same. Only the speed has changed.

Theo van der Westhuizen  ·  ~12 min read

Frombork, Poland. May, 1543.

Nicolaus Copernicus wakes before dawn, as he always does. He is seventy years old, his body failing in ways he has stopped cataloguing, and he has work to do before the canons gather for Lauds. He dresses in the dark. He is a canon of Frombork Cathedral, administrator of church properties, part-time physician to the local poor, and — in the hours before the diocese requires him — an amateur astronomer who has spent the better part of forty years staring at the Baltic sky from a turret he had built into the cathedral wall.

Three weeks ago, a messenger arrived from Nuremberg with a printed copy of his book. De Revolutionibus Orbium Coelestium. On the Revolutions of the Celestial Spheres. He had held it in his hands and wept, though whether from pride or relief or the particular exhaustion of a man who has carried a secret for forty years is not recorded. He had dedicated it to Pope Paul III. He had buried the most dangerous claim — that the Earth moves around the Sun, not the other way — inside dense mathematics that only a handful of people in Europe could follow. He had done everything a careful man could do.

The response, in three weeks, has been essentially nothing. A few letters from scholars. Polite interest. The Pope had not written. The Church had not written. The silence was enormous and, to Copernicus, probably a relief.

He would be dead within weeks. He never knew what he had started.

The most consequential scientific document in a millennium landed with the quiet of a stone dropped into deep water. The ripples were already moving. He just couldn't see them from the surface.

— ◈ —

London. October, 2015.

In a converted office building in King's Cross — the kind of space that still smells faintly of the railway station it used to serve — a group of researchers at a company called DeepMind are watching a computer program learn to play Atari video games.

They are, by any reasonable measure, a ragged bunch. The hours they keep would alarm a physician. The empty coffee cups have achieved a kind of geological layering on the desks. The screens glow with something that looks, to an outsider, like a very simple video game from 1978. Breakout. The ball moves. The paddle swings. Points accumulate.

Except the paddle is not being controlled by a person.

It is being controlled by a system that, forty-eight hours ago, had never seen the game. It had been given the screen pixels and the score. Nothing else. No instructions. No strategy. No human demonstration. It had simply played, and failed, and played again, a hundred thousand times, until it had discovered something that no programmer had told it: that if you tunnel through the side wall and get the ball behind the bricks, you can clear the entire board with almost no effort at all.

The researchers are not celebrating. They are quiet in the way that people get quiet when they see something that doesn't fit inside the available categories. One of them will say later that it felt less like watching a machine learn a game and more like watching something think.

The paper they publish will be received with significant interest in academic circles. The general public will not particularly notice. There are no congressional hearings. No papal letters. The stone enters the water without drama.

The ripples are already moving.

— ◈ —

The Sequel Nobody Wanted

Back in Europe, fifty years after Copernicus died in his sleep, the stone he dropped has been gathering speed.

Galileo Galilei picks up where Copernicus left off and immediately makes every strategic mistake a man can make. He is brilliant, combative, and constitutionally unable to tolerate people who disagree with him. He writes in Italian rather than Latin — the vernacular, the language of ordinary people, the language of the street — because he wants everyone to understand. He builds telescopes and points them at the sky and reports what he sees with the enthusiasm of a man who believes that truth is its own protection.

It is not.

The Church tries him in 1633. He is forced to kneel before the Inquisition and recant everything. Under house arrest for the remaining nine years of his life, he keeps working. The story goes that as he rose from his knees, having sworn that the Earth does not move, he muttered under his breath: and yet it moves. Historians doubt the quote. It is almost certainly invented. It survives because it captures something true about the man regardless.

He is not burned. He is managed. Which, for someone like Galileo, is its own kind of fire.

Then comes Giordano Bruno, and the story changes register entirely.

Bruno had been a Dominican friar who abandoned his monastery in his twenties and spent the next two decades walking across Europe saying things that nobody wanted to hear. Not just that the Earth moves — he was well past that. He said the Sun was just one star among an infinite number of others. That the universe had no centre. That there were likely other worlds, other forms of intelligence, scattered across a cosmos so vast that the Church's claim to explain it was not just wrong but absurd.

He said this in public. He said it repeatedly. He said it in cities that had told him to leave and then returned to say it again. He was incapable of the strategic silence that had kept Copernicus safe and the tactical recantation that had kept Galileo alive. Whether this was courage or something closer to compulsion, nobody can say with confidence.

In February of 1600, they burned him alive in the Campo de' Fiori in Rome. His ashes were scattered in the Tiber so that nothing could be preserved, venerated, or turned into a relic of resistance.

A statue of him was erected on that exact spot in 1889 — not by the Church, which protested it to the end, but by the new Italian state and the freethinkers who raised the money for it. It is still there. It would take the Vatican another century, until the year 2000, to express regret for the fire.

— ◈ —

San Francisco. November, 2022.

The office is in the Mission District and it is not yet morning when the first numbers come in. The team at OpenAI has just released something called ChatGPT. It is not their most technically sophisticated model. It is not, by internal measures, their most capable system. What it is, is accessible — a simple text box, a conversational interface, the kind of thing you can sit down with and use without reading a manual.

One million users sign up in five days.

One hundred million in two months.

The researchers and executives watching the dashboards have the look of people who dropped a stone into water and are only now seeing how far the ripples reach. Some of them are exhilarated. Some are frightened. Several will later say, in interviews and memoirs, that they had not anticipated the scale of the reaction — which is either honest or disingenuous, depending on who you believe.

What is certain is this: for the first time, the blow to humanity's belief in the uniqueness of its own cognition is arriving not in a physics paper or a research lab but in the hands of ordinary people.

A teacher in Ohio discovers that the system can write a better essay than most of her students. A junior lawyer in London discovers it can draft a contract in four minutes that would have taken him a day. A radiologist in Mumbai reads that systems built on the same foundations are matching his accuracy on scans. A copywriter in Johannesburg discovers that the work she spent a decade learning to do can be approximated, with some prompting, in seconds.

None of them know what to call what they are feeling. It is not quite fear. It is not quite awe. It is something that lives in the body before it reaches the mind — the particular vertigo of the ground shifting under your feet while you are still standing on it.

Copernicus had the decency to wait until he was dead before detonating the system.

This one arrived on a Tuesday afternoon with a text box and a blinking cursor.

— ◈ —

We Are Living in the Galilean Moment

This is where we are now. Not at the beginning. Not yet at the end. In the loud, disorienting middle — the Galilean middle — where the idea has escaped the lab and reached the street, where the institutions are circling, where everyone is becoming aware but nobody quite knows what to do with the awareness.

The powerful institutions are doing what powerful institutions always do in the Galilean moment: trying to figure out how to contain it without being seen to contain it.

The European Union produces the AI Act — hundreds of pages of regulatory architecture, liability frameworks, risk classifications. There is genuine safety thinking in it. There is also the unmistakable logic of an institution trying to determine who gets to say what the telescope is allowed to see.

The United States Congress summons Sam Altman — the man who, more than anyone, broadcast the revolution to a hundred million ordinary people — and asks him questions that reveal, with uncomfortable clarity, that the questioners do not fully understand what they are asking about. It is Galileo before the Inquisition, except the Inquisition is working from printed Wikipedia summaries and the hearing keeps running over time.

The universities announce AI policies. The hospitals form ethics committees. The law firms publish position papers. The consulting firms release frameworks. Everyone is generating language as fast as possible, and the language is doing what language always does in a Galilean moment: creating the appearance of comprehension while the actual reckoning is deferred.

Meanwhile the pharmaceutical industry — one of the most profitable power structures in human history, an industry whose entire economic architecture depends on the scarcity and expense of drug development — watches AI begin to do in months what used to take decades. Protein structures solved overnight. Novel compounds identified in disease categories that have resisted treatment for generations. The tools are extraordinary. The implications for an industry built on patent exclusivity and billion-dollar clinical trial budgets are existential.

The industry does not panic. It acquires. It integrates on its own terms. It hires the researchers and buys the startups and lobbies the regulators to ensure that AI-derived compounds face approval processes calibrated for a world where drug development was slow and expensive. It is doing what the Church did with Copernicus — absorbing the blow quietly, in Latin, in ways the general public won't notice until the consequences are already settled.

And in offices and hospitals and schools and law firms in every city on earth, ordinary people are sitting with something they cannot name.

They are not Galileo. They did not broadcast anything.

They are not going to be burned for their beliefs.

They are the people in the crowd, feeling the ground shift, going home, making dinner, coming back tomorrow. The majority. The ones the wave is actually rolling over. And what they are experiencing is not a skills gap, not a change management challenge, not a transition that a well-designed workshop will resolve.

It is what happens when the cosmology you organised your life around begins to fail. Quietly. Personally. At your specific desk, on a specific afternoon, when you run the same task through ChatGPT on a whim and the result is better than yours.

The Galilean moment is also the moment when this disquiet is most likely to be managed rather than held. When organisations respond to identity-level disruption with productivity frameworks and adoption metrics. When the people asking the real questions are told, gently, to reframe more positively. When the ground is shifting and the official message is: please remain calm, the ground is not shifting.

— ◈ —

The First Taste of Bruno Burning

In early 2024, OpenAI quietly revised its usage policies. The prohibition on military and national security applications — language that had been in the policy since the beginning, language that had done some moral work even if imperfectly — was removed.

Sam Altman, who had sat before the United States Senate and spoken with apparent conviction about existential risk, about the responsibility of AI developers, about the importance of safety, had agreed to let the most powerful military in human history use his tools.

It was not announced. It was a policy document. The kind of thing that surfaces on a Tuesday in a newsletter that most people never read.

People read it.

The cancellations began. Not a trickle — a movement. Users who had built their workflows around ChatGPT, who had paid for subscriptions, who had no particular ideological axe to grind, moved. Quietly, individually, without coordinating, they moved. To Claude. To other alternatives. Away.

Altman walked elements of it back. The policy shifted again. The statements multiplied.

But something had happened that couldn't be un-happened. An ethical line had been crossed in public, and the public — not the regulators, not the ethicists, not the governments — had responded with the only instrument ordinary people have: their attention, their money, their willingness to stay.

Anthropic, the company formed explicitly by people who had left OpenAI over safety concerns, found itself the beneficiary of a migration it had not engineered. People moved not because Claude was technically superior in every dimension. They moved because at a critical moment, Anthropic had held a line. Had absorbed pressure from the most powerful government on earth and not fully capitulated.

It was imperfect. Anthropic has investors. Anthropic has its own contradictions and compromises. No institution is the uncomplicated hero in this story.

But the people moving subscriptions were not naive. They understood the imperfection. They moved anyway. Because in the gap between one cosmology failing and another not yet available, the signal that someone is trying to use this power ethically is worth something. Even provisionally. Even knowing it might not last.

This is the first taste of burning. The ash is reputational, financial, social — but it is real, and it was caused not by bringing a dangerous truth into the world but by refusing to insist that the dangerous truth carried moral consequences for how power is exercised.

And elsewhere, right now, there is something darker. Autonomous weapons systems are already making targeting decisions in active conflict zones. Not in a lab. Not in a white paper. In the field, at speed, at a scale no human command structure can fully supervise. The Church isn't just deciding what the telescope can see anymore. It is deciding what the telescope can aim at.

And yet.

The same technology is finding cancers earlier than any radiologist could. Compressing drug discovery from decades into months. Giving a child in a city with no good schools access to a tutor who never tires and never judges. These are not talking points. They are also true.

This is what makes the Galilean moment so disorienting. If AI were only dangerous, the response would be obvious. If it were only miraculous, the response would be easy. It is both, simultaneously, at full volume, and the people with their hands on it are making irreversible choices right now, in real time, with inadequate oversight and more concentrated power than any institution in history has acquired this quickly.

We don't know how the burning arrives. We didn't know in 1600 either. Bruno didn't wake up on the morning of February 17th knowing it was the last morning. He probably thought he was still in the Galilean moment — still in the phase of argument and counter-argument, still in the period where talking was possible.

He was wrong about that.

— ◈ —

In the middle of all of this are most people.

Not the researchers. Not the executives. Not the philosophers willing to burn rather than stop saying what they see.

Just people. Carrying their expertise and their identity and their professional self-respect into a world that is renegotiating the terms without asking them. The analyst who built her career on synthesis. The manager whose value was always in the interpretation. The specialist who became irreplaceable over fifteen years and is now watching that word quietly lose its meaning.

They are not losing a skill. They are losing the story they told themselves about who they are and why they matter. That is a different kind of loss, and it does not respond to a reskilling programme.

The fight to stay relevant on the old terms is a losing fight. Not because these people lack capability — but because the terms themselves are changing faster than any individual can adapt to them. The ground is moving. The cosmology is failing. And the official message, in most organisations, is still: please remain calm, the ground is not shifting.

It is. They know it is. They can feel it in their bodies before they can name it in words.

— ◈ —

The Campo de' Fiori is full of tourists now. They eat gelato on the steps of Bruno's statue. They take photographs. Most of them don't know who he was, or what happened there, or that it took 289 years before anyone could safely raise a memorial on that spot — and four centuries before the institution that burned him could bring itself to admit what it had done.

We are not 289 years away from that admission.

We are much closer to the fire.

LeaderCoherence  ·  leadercoherence.com