Ethical AI: Whose Fault Is It When AI Makes Mistakes? (webinar)

The discussion around artificial intelligence (AI) and the need for ethical AI has intensified with the advent of systems like ChatGPT. This has led to a deeper exploration of not only their technological capabilities but also their humanistic and ethical implications. The Athics webinar “AI Readings for the Summer” hosted Professor Luca Mari, full professor of Measurement Science at the LIUC Università Cattaneo School of Industrial Engineering and author of the book “Dostoevsky’s Artificial Intelligence,” to address these crucial topics.

Ethical AI and Responsibility

With the increasing integration of AI into our daily lives, a fundamental question arises: whose responsibility is it when an artificial intelligence system makes a mistake? Or, more directly, whose fault is it? How can we ensure ethical AI? This is one of the central questions addressed by Professor Mari, who defines it as “probably the most important topic” in his book. 

Traditionally, human beings have been defined as “the animal endowed with speech” (zoon logon echon), but today there are non-living entities that possess this capability. This raises a crucial question: how do we remain different from machines? The answer, according to Mari, lies in responsibility.

Unlike human beings, AIs are not alive in the biological sense: they do not have a metabolism, they do not die, and they can be perfectly cloned. This lack of life means they do not operate in a context of “scarce resources” like time, which for human beings is precious and finite. The awareness of this finitude is what drives us to justify the use of our time and, ultimately, to develop a sense of responsibility and the “meaning of life.” 

AIs, despite being active, autonomous, and capable of making even critical decisions, are not animated by this same existential awareness. Ethical AI exists only if human beings train machines ethically. When software is turned off and on again, it returns to its identical previous state, unlike a human being. Therefore, the responsibility for what AIs do always falls on the human being who designs them, uses them, or delegates tasks to them. “It will be less and less important who or what does things, and it will be more and more important who is responsible for it, who takes ownership, who signs off on it,” states Professor Mari.

Ethical and Educational Challenges

The daily use of AI tools also raises important ethical and educational challenges. Some studies, albeit with limited samples, suggest that excessive dependence on these tools could have repercussions on our ability to learn, think, and remember. This “AI divide” is a significant concern, where those who know how to use AI well can achieve an enhancement of their abilities (“empowerment”), while others may fall behind. 

To address this challenge and that of ethical AI, Professor Mari emphasizes the importance of experimentation and the development of “general, conceptual interpretative frameworks” that can guide us in the appropriate use of AI. An example is Deci and Ryan’s Self-Determination Theory, which identifies three fundamental needs for human well-being and intrinsic motivation: competence, meaningful relationships, and autonomy. 

AI, when used well, can enhance these needs: it can increase our competence, improve our meaningful relationships with others, and boost our autonomy. Conversely, improper use can undermine these same dimensions, leading to a loss of competence (by delegating everything), isolation in relationships (by interacting only with AI), and a decrease in autonomy (loss of self-esteem). Schools and education, therefore, have a crucial role in rethinking their purpose, promoting the use of technology as a tool for the development of these objectives and ensuring that the “AI divide” does not leave some students behind.

We Are Living Through a Third Cultural Revolution

Professor Mari describes the AI era as a “third cultural revolution.” The first two revolutions, in summary, led us to lose our cosmological centrality (with Copernicus) and our biological uniqueness (with Darwin). Today, with the emergence of AI capable of conversing and reasoning in ways we previously considered exclusively human, we are also losing our presumed cognitive uniqueness. Chatbots like ChatGPT are increasingly passing the Turing Test, making it difficult to distinguish whether one is interacting with a machine or a human being, but leaving us humans with the responsibility for ethical AI.

These systems are defined as “cognitively alien” because, despite being the result of our training, we do not fully understand the reasons for their behavior. The errors they make are not “bugs” in the traditional sense, but rather “errors of opinion” or “mistaken opinions,” arising from the impossibility of guaranteeing perfect self-consistency in the vast datasets on which they are trained. This makes them similar to “adolescents” who, if not controlled, can deviate. 

Overcoming the Divide Between Humanistic and Technical-Scientific Culture

The book “Dostoevsky’s Artificial Intelligence” promotes the idea that AI, and chatbots in particular, can act as a bridge between humanistic and technical-scientific cultures. This artificial distinction between the “two cultures” hinders a holistic view of knowledge. Chatbots offer a strategic opportunity to show that “the two cultures are two sides of the same coin called good humanity, or a good life,” a perspective that also grounds ethical AI.

To use these tools consciously, both humanistic skills (such as “prompt engineering” and, more recently, “context engineering”—the ability to ask appropriate questions and engineer the entire dialogue context) and a basic understanding of how they function internally (e.g., understanding concepts like neural network parameters or Retrieval Augmented Generation – RAG) are necessary. This allows users to become more aware and to seize the opportunities that technology offers to “speak natural language with technology.” 
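To make the RAG concept mentioned above concrete, here is a minimal, self-contained sketch of the retrieve-then-prompt pattern. It is an illustration only: the keyword-overlap retriever stands in for a real embedding-based vector search, and the final prompt would normally be sent to a language model rather than printed. All function names (`retrieve`, `build_prompt`) and the sample documents are hypothetical.

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query.

    A real RAG system would use embeddings and a vector database here;
    plain set intersection keeps the sketch dependency-free.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Context engineering step: ground the model's answer in retrieved text."""
    context = retrieve(query, documents, top_k=1)[0]
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "Dostoevsky wrote Crime and Punishment in 1866.",
    "The Turing Test was proposed by Alan Turing in 1950.",
]
prompt = build_prompt("Who proposed the Turing Test?", docs)
print(prompt)  # the prompt now contains the Turing document, not the Dostoevsky one
```

The point of the pattern is the same one Professor Mari makes about conscious use: the quality of the answer depends less on the model itself and more on how the human engineers the question and its context.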

Ethical AI in Conclusion

In conclusion, discussing ethical AI forces us to redefine what it means to be human in a world where machines can emulate our cognitive abilities. Responsibility emerges as the fundamental distinguishing trait that separates us from artificial “things.” The final invitation is to use these “cognitive superpowers” to become more aware of the value of our lives and to build a future where ethical AI is at our service, promoting a good life in harmony with others and the environment.