When the Hungarian writer László Krasznahorkai said he could no longer speak about hope and turned instead to angels in his Nobel Prize lecture on December 7, he was really talking about mediation. The old angels brought speech from a transcendent ‘above’ and their very existence meant the world had a direction and a scale — towards something higher than us, guided by messages we could only receive. Krasznahorkai’s new angels, however, he said, have no discernible origin and no message to deliver. They move among us as a fragile, wounded people that can be destroyed by a word or a small humiliation. In his telling, they are sacrifices “because of us”.
Does this imagery sound familiar? It may well describe human lives in the age of artificial intelligence (AI), with people lost inside the systems they built as content moderators, clickworkers, data-labellers, and gig workers — the people whose lives machines quietly profile and feed into welfare algorithms and predictive policing systems, their data scraped and sold off to train models that will then be used to govern them.
The “new angels” are not machines: they are the ones made expendable by the machines’ logic — even if AI systems are unsettlingly close to a sort of counterfeit angelic form. They appear out of an abstract “cloud”, speak fluently in every register, and occupy the position of the messenger without offering messages of their own. Krasznahorkai’s angels are silent and demand a message from us that we no longer have. AI, however, will not shut up even as it hollows out our own capacity to speak from a grounded place. It simulates advice, empathy, knowledge, and even moral reasoning, but all as frightfully dull recombinations of text within a techno-economic stack that remains opaque. AI thus embodies precisely the loss he mourns: authority without responsibility, language without speech.
NVIDIA CEO Jensen Huang introduces an “Industrial AI Cloud” project during a press conference in Berlin, Germany, November 4, 2025. | Photo Credit: Reuters
Krasznahorkai’s lecture was really a long hymn to the dignity and exhaustion of the human species, and much of it landed very close to current debates about AI. He listed the astonishing run of human inventions — from art to philosophy, agriculture to science — before he turned to the present, where the same species has built devices to leave itself with only short-term memory. Is that not an accurate description of the attention economy AI has parachuted into and is now being used to supercharge? Developers and businesses are building AI models into feeds, search engines, advertising, and productivity tools, all of which push us toward faster, more fragmented interactions.
But there is a second, more brutal layer. To train very large models you need massive, already existing stores of language. And Big Tech is treating what Krasznahorkai called the “noble and common possession of knowledge and beauty” as free raw material. The same civilisation that once struggled to create those works now builds systems that can cheaply imitate their surface forms while drawing value away from the institutions and labour that produce them. It is yet again “sacrifices because of us” — the cultural commons and its workers being consumed to fuel the appearance of infinite, effortless intelligence.
The U-Bahn scene in particular threw a peculiar light on AI governance. At an underground station in Berlin in the 1990s, Krasznahorkai recalled watching a homeless man painfully urinating on the platform’s ‘forbidden zone’ while a distant policeman rushed to punish him. The policeman on the platform was “the good sanctioned by all”, the bearer of law and order; the sick man urinating on the tracks was cast as evil. Ten metres of trench separated them. In lived time, the policeman would probably have caught him, but Krasznahorkai froze the image: in reality, good never reaches evil; the distance is unbridgeable.
We confront the same unbridgeable distance between our apparatus of ethics boards, principles, regulations, and “alignment” on one hand and, on the other, the mess of institutionalised harm: exploitative supply chains, surveillance, disinformation, and militarisation. As long as the architecture stays the same, Krasznahorkai’s diagnosis went, and the design continues to make some bodies visible and punishable and others invisible and protected, the chase will go on forever. The good of regulation runs only within a structure that guarantees its failure.

Sam Altman, co-founder and CEO of OpenAI, sits in the audience before a panel discussion on the future of artificial intelligence at TU Berlin, February 2025. | Photo Credit: Getty Images

Debates around AI often shrink to technofixes: better benchmarks, safer outputs, slightly stricter rules for deployment, and so on. Krasznahorkai’s lecture is, however, a refusal to treat symptoms in isolation. The tools we call ‘AI’ are emerging from a civilisation that treats attention as a resource to mine and the vulnerable as acceptable losses. If the new angels are sacrifices because of us, an AI politics worthy of his terms would have to be a politics that reduces the number of sacrifices altogether: one that investigates where and how data are taken, who labours in the shadows, who bears the environmental and social costs, and which uses are simply off-limits no matter how profitable they are.
Yet there is a strong sense that nobody can simply exit the AI train. States feel compelled to invest lest they fall behind. Companies feel compelled to deploy lest they lose advantage. Individuals feel compelled to adopt lest they lose work. What remains is a sort of minimal ethic: maintain your capacity for attention and for naming sacrifices, even if you cannot yet see a path outside.
The ultimate question is not whether ‘art’ or ‘human creativity’ will survive AI but whether the civilisation that deployed AI still has the imagination and moral vocabulary to send any real message at all, as much to the angels it is sacrificing as to the tools it is unleashing in its own name.
Published – December 09, 2025 03:09 pm IST