Luca Mari – Full Professor of measurement science at Università Carlo Cattaneo.
This article was published in HEADlight February 2025
Since the public release of OpenAI's ChatGPT at the end of November 2022, everything around Generative Artificial Intelligence (GenAI) has been changing at an unexpected, rapid, and radical pace. This makes drawing a coherent framework to understand what has happened so far a complex task, and reliably predicting what could happen even in the near future a hopeless endeavour. There is a common feeling that the widespread adoption of GenAI systems is going to affect our society along many dimensions – geopolitics, sustainability, the labour market, and so on – but setting a well-grounded decision-making strategy, while more and more necessary, still seems to be mainly a matter of guesswork. Nevertheless, one key point can now be safely accepted: even if it very plausibly will become an industrial revolution, it is already today a cultural revolution. This is because, in our interaction with chatbots (i.e., “conversation machines”), we are, for the first time in history, having semantically rich and linguistically sophisticated verbal exchanges with entities that are not human beings.
A note is perhaps helpful here to reduce the risk of misunderstanding. The phrase “semantically rich and linguistically sophisticated verbal exchanges” in the sentence above does not imply that such entities are “really” intelligent, intentional, sentient, and so on. I do not take a position on this subject here, both because I doubt that whether a chatbot is “really” intelligent, intentional, sentient, ... is an important issue at this stage, and because I doubt that everyone would agree on the criteria by which even a human being could be characterized as such.
Rather, our focus should remain on the paradigm shift underlying current chatbots and their siblings. The received view is that human–computer interaction has so far been realized mainly according to a principle stated by Ada Lovelace in 1842: the computer “can do whatever we know how to order it to perform”. Hence computers – and more generally digital systems – have been conceived of as entities whose behaviour results from the explicit transfer of procedural information from human beings: programmed machines, for short. But a moment's reflection convinces us that no algorithmic or programming knowledge is available that could produce the verbal exchanges we are having with chatbots. How do they do what they do, then? A seminal answer comes from a paper published in 1950 by Alan Turing, who noticed that we are able to solve many problems while remaining unable to explain how we solve them. For example, and crucially, we are proficient in conversation, though we do not have any “conversation algorithm”. Turing commented: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.” This is the key difference between software developed to solve problems and software developed to learn how to solve problems – the latter being what Turing himself called a “learning machine”.
The stakes are now clear, considering the strategic role of education in a knowledge society and the foundational role that Western culture has attributed to language (the Greek lógos) in defining the humanity of human beings. These days, with hardly any cognitive or psychological preparation (sci-fi aside...), we find ourselves in a context in which cohabitation with “conversation machines” (and more to come, of course) is a matter of fact: even before the fear of massive job losses, we have to discover new reasons for a new anthropocentrism.
We develop technology and its products to support us in problem solving, sometimes to the point of accepting to be substituted by artificial agents (how many of us are still able to compute square roots without a digital device?). But what is at stake here is the technology of language, of reasoning, of thinking: accepting to be replaced by it would be the greatest tragedy of humanity; being empowered by it will lead us to a better society, one in which GenAI “doesn't replace human labor and human agency but rather amplifies human abilities and human flourishing. Of course, this way of thinking isn't a given. It's a choice.” (R. Hoffman, Impromptu: Amplifying Our Humanity Through AI, 2023).
If not now, when? And if not the school, what?