In a year already shaping up to be “revolutionary” for Generative Artificial Intelligence and its multimodal applications, the European Union appears to have entered a race against the clock. The arrival of ChatGPT and other developments has forced lawmakers to reconsider the assembly of the long-awaited “AI Law”, an ambitious project that seeks to protect citizens against the possible risks associated with emerging technology.
According to Reuters, lawmakers were expected to reach a consensus to approve a bill, two years in the making, that sets clear rules on the use of Artificial Intelligence in Europe. The outlet reports that debates have revealed disagreement among representatives on several aspects of the law.
The working groups associated with the bill have submitted more than 3,000 amendments covering every facet of AI development, from a dedicated AI office to the overall scope of the project, whose text already runs to 108 pages.
For specialists, the knot preventing a quicker resolution is the balance between encouraging innovation and protecting citizens’ rights. On that scale, a system’s level of “perceived risk” is classified as minimal, limited, high or unacceptable, with each level carrying transparency obligations commensurate with the risk.
According to MEPs, the future law will be revised on an ongoing basis as the industry changes. The urgency, however, stems from the close of the legislative period in 2024, the year in which European elections will be held.
AI law in Europe: what you should know
The project seeks to cover companies that provide an AI-based product or service within Europe. This includes systems that generate content, predictions or recommendations and that can influence the environments in which they operate. These rules will work in conjunction with other related laws, such as the General Data Protection Regulation (GDPR).
The law covers both private and public uses of AI, weighted by the level of interaction a system has with citizens. If an AI is used for surveillance purposes, the developing company will be held to higher transparency requirements, while the deployment of AI technologies categorized as “unacceptable” is prohibited outright.
The draft law also provides for fines when companies fail to comply with the regulation. The amount can reach 30 million euros or 6% of global annual turnover, whichever is higher.
ChatGPT and the AI Law: where does it fit in?
For newer technologies built on Generative Pre-trained Transformers (GPT), such as ChatGPT, the legislative initiative considers the category of “GPAIS”, or General Purpose AI System. Part of the debate centers on whether to label these technologies “High Risk” and what that would mean for companies wanting to add them to their own products.
The inclusion of generative AI in customer care, for example, would have to go through an EU-designated transparency process and work in conjunction with the GDPR and other laws governing technology practices and citizens’ rights.
The draft law remains under discussion until it achieves consensus among parliamentarians and can move to a trilogue between representatives of the European Parliament, the Council of the European Union and the European Commission. If approved, companies would have a grace period of up to two years to adapt to the regulatory framework.