EU lawmakers pass the world’s first sweeping AI regulations


The European Parliament has approved sweeping legislation to regulate artificial intelligence, nearly three years after the draft rules were first proposed. Officials reached a provisional agreement in December. On Wednesday, members of parliament approved the AI Act with 523 votes in favor, 46 against and 49 abstentions.

The EU says the rules seek to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk artificial intelligence, while boosting innovation and establishing Europe as a leader in this field.” The act sets out obligations for AI applications based on potential risks and impacts.

The legislation has not yet become law. It is still subject to legal-linguistic checks, and the European Council has yet to formally endorse it. But the AI Act is likely to come into force before the end of the legislature, ahead of the next parliamentary elections in early June.

Most provisions will take effect 24 months after the AI Act becomes law, but bans on prohibited applications will apply after just six months. The EU is prohibiting practices it believes will threaten citizens’ rights. “Biometric classification systems based on sensitive features” and the “indiscriminate scraping” of facial images from CCTV footage and the internet to create facial recognition databases will be outlawed. Clearview AI’s practices would fall into this category.

Other applications that will be banned include emotion recognition in schools and workplaces, and “AI that manipulates human behavior or exploits human weaknesses.” Some aspects of predictive policing will also be prohibited, i.e. when it is based entirely on assessing someone’s characteristics (e.g. inferring their sexual orientation or political views) or on profiling them. Although the AI Act largely bans law enforcement’s use of biometric identification systems, it will be allowed with prior authorization in certain circumstances, such as helping locate a missing person or preventing a terrorist attack.

Applications considered high-risk, including the use of AI in law enforcement and healthcare, are subject to certain conditions. They must not be discriminatory and must respect privacy rules. Developers must also demonstrate that the systems are transparent, safe and explainable to users. As for AI systems the EU deems low-risk (such as spam filters), developers must still inform users that they are interacting with AI-generated content.

The law also sets out rules for generative AI and manipulated media. Deepfakes and any other AI-generated images, video and audio must be clearly labeled. AI models will also have to respect copyright laws. “Rightholders may reserve rights over works or other subject matter to prevent the extraction of text and data unless it is done for scientific research purposes,” the text of the AI Act reads. “Where appropriate opt-out rights are expressly reserved, providers of general-purpose AI models must seek permission from rights holders if they wish to access text and data on such works.” However, AI models built purely for research, development and prototyping are exempt.

The most powerful general-purpose and generative AI models (those trained using more than 10^25 FLOPS of total computing power) are covered under the rules. The threshold can be adjusted over time, but OpenAI’s GPT-4 and DeepMind’s Gemini are believed to fall into this category.

Providers of such models will need to assess and mitigate risks, report serious incidents, provide details on their systems’ energy consumption, ensure they meet cybersecurity standards, and conduct state-of-the-art testing and model evaluations.

As with other EU regulations targeting technology, the penalties for violating the provisions of the AI Act can be steep. Companies that break the rules will face fines of up to 35 million euros ($51.6 million) or seven percent of their global annual turnover, whichever is higher.

The AI Act applies to any model operating in the EU, so US-based AI providers will need to comply with it, at least in Europe. OpenAI CEO Sam Altman suggested last May that his company might pull out of Europe if the AI bill became law, but the company later said it had no such plans.

To enforce the law, each member state will create its own AI supervisory authority, and the European Commission will set up an AI Office. The office will develop methods to evaluate models and monitor risks in general-purpose models. Providers of general-purpose models deemed to carry systemic risks will be asked to work with the office to draw up codes of conduct.
