It will be the world’s first comprehensive regulation of artificial intelligence. Intense negotiations last December between the Council, representing the Member States, and the European Parliament produced a provisional agreement.
The draft regulation aims to ensure that AI systems placed on the European market are safe and respect the fundamental rights and values of the EU.
The proposal is a first step and marks a historic milestone: according to European authorities, its purpose is to stimulate investment and innovation in AI in Europe.
The ratification of the agreement by both parties does not set a firm date for the law, but it does set an application deadline of two years after it enters into force, confirming that AI regulation is here to stay.
However, the practical scope of this future AI law raises numerous questions and a climate of uncertainty for the tech sector. What will happen once the law comes into force? The truth is that its application in practice is still unknown, and it is difficult even to imagine what enforcing these rules will look like.
The Sticking Points: Biometric Surveillance and the Control of Tools like ChatGPT
Two of the most controversial issues shaped the final stage of the negotiations: the use of facial recognition cameras and biometric surveillance in public spaces, and the use, already widespread in practice, of tools such as ChatGPT or DALL-E.
Parliament wanted them prohibited; the governments, for their part, pushed hard until they secured the right for security forces to use them in certain justified cases.
A Risk Classification Scale for the AI Law
The European Union’s aim is for all products incorporating artificial intelligence to be classified by risk level: minimal, limited, high, or unacceptable. The law will be more or less permissive depending on that classification.
A minimal- or limited-risk obligation would be, for example, disclosing that content was generated with AI so that users can decide how to use it. The most restrictive categories relate to:
- Cognitive or behavioural manipulation, including systems that exploit the vulnerabilities of groups such as children or people in vulnerable situations due to disability or social or economic circumstances. Practices of this kind are prohibited outright.
- The untargeted scraping of facial images from the internet or CCTV footage.
- Emotion recognition in the workplace or in educational institutions.
- Social scoring, and biometric categorization to infer sensitive data such as sexual orientation or beliefs, as well as certain forms of police surveillance of individuals.
For their part, tools such as ChatGPT have reached such a level of capability in assisting and simplifying tasks that, to curb the spread of illegal content and material protected by copyright, measures are proposed such as labelling content that has been generated with the application.
However, some European states have been more permissive here, since other economic interests take precedence, notably the fear that imposing too many restrictions on tools of this type could hand the American giants the lead in the development of AI.
The Regulation of AI in Europe Raises Questions for Tech Companies
The law provides for significant fines for violations, ranging from 7.5 to 35 million euros, or up to 7% of the global turnover of the company that commits the violation.
Although this regulation will only materialize in the medium term, expected in 2026 at the latest, it poses a clear challenge for technology companies: as always, they must adapt to this new legal framework for artificial intelligence.