According to the latest McKinsey report, “The economic potential of generative AI,” the value of generative AI can be estimated at between $2.6 trillion and $4.4 trillion a year: the equivalent of up to 4.4% of world output.

According to McKinsey, the development of generative AI would have a significant impact on output across various industrial sectors (including banking, high tech, and biotechnology) and would help increase labor productivity, partially offsetting declines in employment and contributing to economic growth.

In particular, four business functions will benefit most from automation-driven productivity gains: Customer Operations, Marketing and Sales, Software Engineering, and Research and Development.

In Customer Operations, for example, generative AI could reduce the volume of assistance requests handled by human agents by 50%, while in Sales and Marketing its contribution to content creation would help increase productivity by 5% to 15%.

In Software Engineering, generative AI assistance in code development, correction, and refactoring would increase productivity by 20% to 45%.

However, the growing sophistication and remarkable market potential of generative AI do not exempt the technology from significant limitations.


The IDC research “Generative AI in EMEA: Opportunities, Risks, and Futures” identifies three main aspects of generative AI in which challenges and risks are concentrated: the intrinsic capabilities (and limitations) of the systems, the processes by which they are created, and the ways in which they are potentially used.

For example, under “Capabilities” the report identifies the risks associated with AI “hallucinations,” bias, and the production of generic content, while under use it notes exposure to the risks of plagiarism, disinformation, fraud, and unplanned behaviors.

Source: IDC “Generative AI in EMEA: Opportunities, Risks, and Futures”

The importance of control

We recently read about the misadventure of a New York lawyer who entrusted ChatGPT with finding court rulings in support of his case, without bothering to verify their authenticity.

Indeed, it is true that AI based on large language models “knows how to speak, but does not know what it is saying.”

In technical jargon these are called “hallucinations”: when it cannot find an answer, the AI returns distorted or completely invented information.

This is why it is important to maintain human control not only when training models, but also when verifying the generated output.
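That human checkpoint can be made explicit in software. The sketch below is a minimal, hypothetical illustration (the function and message format are invented for this example, not taken from any real system): model output is withheld until a reviewer has signed off on it, rather than being published automatically.

```python
# Minimal human-in-the-loop sketch (hypothetical names): every model answer
# is held for explicit review instead of being trusted automatically.

def review_gate(question: str, model_answer: str, approved_by_human: bool) -> str:
    """Release the model's answer only once a human reviewer has verified it."""
    if approved_by_human:
        return model_answer
    # Unverified output is flagged, never silently passed downstream.
    return f"[PENDING REVIEW] Draft answer to '{question}' withheld until verified."

# Unreviewed output stays flagged:
print(review_gate("Find supporting rulings", "Smith v. Jones (1998)...", approved_by_human=False))
```

The point of the design is that the default path is "blocked": forgetting to verify produces a flagged placeholder, not a confident hallucination.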

Among the main limitations of ChatGPT, for example, we find:

  • The risk of bias: ChatGPT is trained on a large set of text data, and that data may contain prejudices or biases. This means the AI can sometimes generate responses that are unintentionally biased or discriminatory.
  • Limited knowledge: Although ChatGPT has access to a large amount of information, it cannot access all of human knowledge. It may be unable to answer questions on very specific or niche topics and may be unaware of recent developments or changes in certain fields.

However risky it may be, the use of Large Language Models in business interactions will quickly become unavoidable.

What precautions to take?

According to Gartner, it is imperative that organizations adopt the following “best practices”:

  • Make sure people (employees, customers, citizens) are aware that they are interacting with a machine, by clearly labeling the entire conversation
  • Activate due diligence and technology auditing tools to track uncontrolled bias and other reliability issues in the use of LLMs
  • Protect privacy and security by ensuring that sensitive data is neither entered into prompts nor derivable from LLM output, and is not used as a training data set outside the organization
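The third practice, keeping sensitive data out of prompts, is often enforced with a redaction step before any text leaves the organization. The sketch below is a deliberately crude illustration with invented names and simplistic regex patterns (a real deployment would rely on a dedicated PII-detection service, not two regexes):

```python
import re

# Hypothetical pre-prompt redaction step: replace email addresses and
# card-like digit sequences with labeled placeholders before the text
# is sent to an external LLM. Patterns here are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Substitute each detected sensitive span with its category label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "Refund the order for jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))  # sensitive spans replaced by <EMAIL> and <CARD>
```

Running the redaction locally, before the API call, means the sensitive values never reach the model provider and therefore cannot end up in someone else's training set.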


Generative AI, represented by models such as ChatGPT, has opened up new perspectives in natural language processing and content creation. However, it is essential to recognize its potential risks and current limitations. From the lack of contextualization to the risks of bias, manipulation, and misinformation, generative AI presents significant challenges that require a responsible and informed approach.

On 14 June 2023, the European Parliament approved its amendments to the European Union's Artificial Intelligence Act (AI Act or AIA), adopting an approach based on the perceived “risk” associated with AI-based technologies, especially in the following areas:

  • Emotion recognition
  • Acquisition of biometric data
  • Social scoring
  • Restrictions for Generative AI
  • AI Recommendation Systems

Some of the considerations underlying the AI Act raise open questions and debates that will be addressed by the Council of the European Union and the European Commission before the bill becomes effective.

However, the very need for regulation confirms the epochal change that artificial intelligence is driving in every field.

Adopting safety measures, training models ethically, and becoming informed digital citizens are all key to mitigating the risks associated with generative AI and maximizing its benefits. Only through a conscious understanding of the current capabilities and limitations of this technology can we use it responsibly and contribute to a safer and more inclusive digital future.