In the evolving landscape shaped by artificial intelligence, understanding the risks of AI—bias, hallucinations, lack of transparency, data vulnerabilities, and new ethical dilemmas—is essential.
These topics were at the center of the webinar “AI, Privacy and the Digital Future”, organized by Athics, featuring contributions from Avv. Maria Elena Iafolla (lawyer and DPO specializing in new technology law) and Prof. Riccardo Petricca (AI professor, DPO, and Innovation Manager).
The discussion highlighted the risks of AI and the crucial role of the Data Protection Officer (DPO) as a guide in balancing innovation and protection.
Regulatory Evolution and the AI Act: Mitigating the Risks of AI through Integrated Compliance
As Maria Elena Iafolla explained, the introduction of the European AI Act marks a turning point in AI regulation.
This regulation aims to ensure a harmonized and consistent approach within the European Union, establishing clear principles on transparency, security, and the protection of fundamental rights.
Alongside the AI Act, existing regulations such as the GDPR, the NIS2 Directive, and the DORA Regulation on digital operational resilience remain fully applicable.
The common thread, Iafolla emphasized, is the risk-based approach: no longer just a checklist of measures to adopt, but a precise assessment of the risks posed by each AI processing activity or application.
This means that compliance can no longer be fragmented into parallel tracks (one for the GDPR, one for the AI Act, one for cybersecurity) but must be integrated, dynamic, and sustainable.
From this perspective, the DPO becomes the beating heart of corporate governance—a bridge between innovation and legality.
Examples: Between Bias and Hallucinations
Bias and Algorithmic Discrimination Among the Risks of AI
“An AI system is not inherently neutral,” Maria Elena Iafolla noted during the webinar.
Generative models learn from data: if the data are incomplete, distorted, or culturally unbalanced, the results will be as well.
Algorithmic bias is one of the most insidious risks of AI, as it can produce discriminatory decisions in sensitive sectors such as employment, finance, or healthcare.
For this reason, organizations must adopt an approach that combines ethical and technical assessments, with balanced datasets, periodic audits, and specific training on AI fairness and algorithmic accountability.
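To make the idea of a periodic fairness audit more concrete, here is a minimal sketch in Python that compares selection rates across groups and flags a possible disparate impact. The column names, the synthetic data, and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions, not measures prescribed by the webinar or the AI Act.

```python
# Minimal sketch of a periodic fairness audit: compare selection rates
# across groups and flag a possible disparate impact.
# Column names ("group", "hired") and the 0.8 threshold are illustrative
# assumptions, not regulatory requirements.
import pandas as pd

def disparate_impact_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Selection rate per group and its ratio to the best-treated group."""
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    report["flag"] = report["impact_ratio"] < 0.8  # possible adverse impact
    return report

# Example with synthetic screening decisions produced by an AI system.
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0, 0, 0],
})
print(disparate_impact_report(decisions, "group", "hired"))
```

Run regularly, a report of this kind can feed the audits and accountability documentation mentioned above, turning "fairness" from a principle into a measurable indicator.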
Hallucinations and False Content
Among the most evident risks of AI are hallucinations: cases in which a model generates plausible but false information.
During the webinar, Iafolla cited a striking example: “legal documents drafted by AI citing non-existent rulings.”
This illustrates how the improper use of AI can compromise an organization’s credibility and reliability.
Hallucinations are not just technical errors but also reputational and legal risks: incorrect outputs can lead to legal consequences and loss of trust.
The solution? Constant human supervision, combined with the adoption of validation systems that make it possible to verify model-generated information before it is used.
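As a sketch of what such a validation gate might look like, the Python snippet below checks every citation in a model-generated draft against a registry of verified sources and routes anything unverified to a human reviewer. The registry, the citation format, and the regular expression are hypothetical examples, not a specific tool discussed in the webinar.

```python
# Minimal sketch of a validation gate for model-generated text: every
# citation produced by the model must match a registry of verified
# sources, otherwise the draft is escalated to a human reviewer.
# The registry and the citation pattern are illustrative assumptions.
import re

VERIFIED_CITATIONS = {          # hypothetical registry of checked rulings
    "Cass. civ. 12345/2020",
    "Corte Cost. 20/2019",
}

CITATION_PATTERN = re.compile(r"(Cass\. civ\. \d+/\d{4}|Corte Cost\. \d+/\d{4})")

def validate_draft(text: str) -> tuple[bool, list[str]]:
    """Return (approved, unverified_citations) for a model-generated draft."""
    cited = CITATION_PATTERN.findall(text)
    unverified = [c for c in cited if c not in VERIFIED_CITATIONS]
    return (len(unverified) == 0, unverified)

draft = "As held in Cass. civ. 12345/2020 and Cass. civ. 99999/2023, ..."
approved, to_review = validate_draft(draft)
if not approved:
    print("Escalate to human review; unverified citations:", to_review)
```

The point is not the specific check but the workflow: automated screening first, then a human decision on anything the system cannot confirm.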
Privacy, Security, and Governance: The Fundamental Role of the DPO
Generative AI “lives” on data, but every piece of data processed also represents a potential vulnerability.
As Prof. Riccardo Petricca noted, “It is not enough for data to be processed legally; we need to understand how it is used and whether the algorithm introduces risks or discrimination.”
In his intervention, Petricca cited the case of a healthcare facility using AI for diagnostics: “We banned the use of common generative tools and imposed the adoption of a proprietary AI system with high levels of cybersecurity and privacy.”
A concrete example of a privacy-by-design approach.
Protecting data does not just mean complying with regulations; it also means building trust and learning to manage data ethically and consciously.
As highlighted during the webinar, the DPO is no longer the person who blocks innovation, but the one who helps it grow sustainably.
Conclusion: Privacy as a Lever for Innovation
The “AI, Privacy and the Digital Future” webinar concluded with a reflection capturing the core of the debate:
“Privacy is not a constraint but a lever we can use to innovate in the best possible way.”
Managing the risks of AI does not mean slowing down innovation but governing it consciously.
It means adopting an integrated approach to data governance, training people, carefully selecting technology partners, and above all, building trust between users, companies, and technologies.
The DPO remains a key figure in this balance: a professional capable of combining law, ethics, and technology, transforming data protection into a competitive asset.
As Prof. Petricca concluded: “Transparency, reality, and honesty are the foundation for a sustainable digital future.”