EU countries approve technical details of AI Act


EU countries today agreed on the technical details of the AI Act, the world’s first attempt to regulate the technology using a risk-based approach, following a political agreement reached in December. The green light from EU lawmakers is still needed before the rules come into force.


It remained uncertain until the last minute whether an agreement would be reached today.

France in particular has been skeptical about the regulation of so-called foundation models such as ChatGPT, opposing any binding obligations on suppliers of such models. It also expressed reservations about transparency requirements and trade secrets. Nevertheless, at today’s meeting of EU ambassadors the text was approved unanimously.


The European Commission’s risk-based approach to AI was generally welcomed in 2021, when the regulation was first presented, but came under pressure in late 2022, when OpenAI launched ChatGPT and triggered a global debate on chatbots.

Because the EU executive’s original plan did not include provisions for foundation models, the European Parliament added a new article with an extensive list of obligations to ensure that these systems respect fundamental rights.

In response, Germany, France and Italy put forward a counter-proposal favoring “mandatory self-regulation through codes of conduct” for foundation models.

After today’s approval, the European Parliament will most likely vote in the Internal Market and Civil Liberties committees in mid-February and in plenary in March or April. The law is then expected to come into force by the end of the year, with an implementation period of up to 36 months. The requirements for AI models will begin to apply after one year.

The law divides AI systems into four main categories depending on the potential risk they pose to society.

Systems considered high risk will be subject to strict rules that apply before they enter the EU market. Once available, they will be under the supervision of national authorities, supported by the AI Office within the European Commission.

Those that fall into the minimal risk category will be exempt from additional rules, while those labeled as limited risk will have to follow basic transparency obligations.
