Project Syndicate

The Post-Human Desert

The panic about AI stems from the fear that even those who are spearheading its progress will be unable to steer it.

The Future of Life Institute's open letter demanding a six-month precautionary pause on artificial-intelligence development has already gathered thousands of signatories, including high-profile figures such as Elon Musk. The signatories worry that AI labs are "locked in an out-of-control race" to develop and deploy increasingly powerful systems that no one – including their creators – can understand, predict, or control.

What explains this outburst of panic among a certain cohort of elites? Control and regulation are obviously at the centre of the story, but whose? During the proposed half-year pause when humanity can take stock of the risks, who will stand for humanity? Since AI labs in China, India, and Russia will continue their work (perhaps in secret), a global public debate on the issue is inconceivable.

Still, we should consider what is at stake here. In his 2016 book, Homo Deus, the historian Yuval Noah Harari predicted that the most likely outcome of AI would be a radical division – much stronger than the class divide – within human society. Soon enough, biotechnology and computer algorithms will join forces in producing "bodies, brains, and minds," resulting in a widening gap "between those who know how to engineer bodies and brains and those who do not." In such a world, "those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction."

The panic reflected in the AI letter stems from the fear that even those who are on the "train of progress" will be unable to steer it. Our current digital feudal masters are scared. What they want, however, is not public debate, but rather an agreement among governments and tech corporations to keep power where it belongs.

A massive expansion of AI capabilities is a serious threat to those in power – including those who develop, own, and control AI. It points to nothing less than the end of capitalism as we know it, manifest in the prospect of a self-reproducing AI system that will need less and less input from human agents (algorithmic market trading is merely the first step in this direction). The choice left to us will be between a new form of communism and uncontrollable chaos.

The new chatbots will offer many lonely (or not so lonely) people endless evenings of friendly dialogue about movies, books, cooking, or politics. To reuse an old metaphor of mine, what people will get is the AI version of decaffeinated coffee or sugar-free soda: a friendly neighbour with no skeletons in its closet, an Other that will simply accommodate itself to your own needs. There is a structure of fetishist disavowal here: "I know very well that I am not talking to a real person, but it feels as though I am – and without any of the accompanying risks!"

In any case, a close examination of the AI letter shows it to be yet another attempt at prohibiting the impossible. This is an old paradox: it is impossible for us, as humans, to participate in a post-human future, so we must prohibit its development. To orient ourselves around these technologies, we should ask Lenin's old question: Freedom for whom, to do what? In what sense were we free before? Were we not already controlled much more than we realised? Instead of complaining about the threat to our freedom and dignity in the future, perhaps we should first consider what freedom means now. Until we do this, we will act like the hysterics who, according to the French psychoanalyst Jacques Lacan, are desperate for a master, but only one that they can dominate.

The futurist Ray Kurzweil predicts that, owing to the exponential nature of technological progress, we will soon be dealing with "spiritual" machines that will not only display all the signs of self-awareness but also far surpass human intelligence. But one should not mistake this "post-human" stance for the paradigmatically modern preoccupation with achieving total technological domination over nature. What we are witnessing, instead, is a dialectical reversal of that process.

Today's "post-human" sciences are no longer about domination. Their credo is surprise: what kind of contingent, unplanned emergent properties might "black-box" AI models acquire for themselves? No one knows, and therein lies the thrill – or, indeed, the banality – of the entire enterprise.

Hence, earlier this century, the French philosopher-engineer Jean-Pierre Dupuy discerned in the new robotics, genetics, nanotechnology, artificial life, and AI a strange inversion of the traditional anthropocentric arrogance that technology enables:

"How are we to explain that science became such a 'risky' activity that, according to some top scientists, it poses today the principal threat to the survival of humanity? Some philosophers reply to this question by saying that Descartes's dream – 'to become master and possessor of nature' – has turned wrong, and that we should urgently return to the 'mastery of mastery.' They have understood nothing. They don't see that the technology profiling itself at our horizon through 'convergence' of all disciplines aims precisely at nonmastery. The engineer of tomorrow will not be a sorcerer's apprentice because of his negligence or ignorance, but by choice."

Humanity is creating its own god or devil. While the outcome cannot be predicted, one thing is certain. If something resembling "post-humanity" emerges as a collective fact, our worldview will lose all three of its defining, overlapping subjects: humanity, nature, and divinity. Our identity as humans can exist only against the background of impenetrable nature, but if life becomes something that can be fully manipulated by technology, it will lose its "natural" character. A fully controlled existence is one bereft of meaning, not to mention serendipity and wonder.

The same, of course, holds for any sense of the divine. The human experience of "god" has meaning only from the standpoint of human finitude and mortality. Once we become homo deus and create properties that seem "supernatural" from our old human standpoint, "gods" as we knew them will disappear. The question is what, if anything, will be left. Will we worship the AIs that we created?

There is every reason to worry that tech-gnostic visions of a post-human world are ideological fantasies obfuscating the abyss that awaits us. Needless to say, it would take more than a six-month pause to ensure that humans do not become irrelevant, and their lives meaningless, in the not-too-distant future.

Slavoj Žižek, Professor of Philosophy at the European Graduate School, is International Director of the Birkbeck Institute for the Humanities at the University of London and the author, most recently, of "Heaven in Disorder" (OR Books, 2021).

Copyright: Project Syndicate, 2023.
www.project-syndicate.org
