On Thursday, the European Union published the first draft of its Code of Practice for general-purpose AI (GPAI) models. The document, which won't be finalized until May, lays out guidelines for managing risks and gives companies a blueprint for avoiding hefty penalties. The EU AI Act entered into force on August 1, but it left room to pin down the specifics of the GPAI rules later. This draft (via TechCrunch) is a first attempt to clarify what's expected of those more advanced models, giving stakeholders time to submit feedback and refine it before the rules kick in.
GPAIs are models trained with a total computing power of more than 10²⁵ FLOPs. Companies expected to fall under the EU's guidelines include OpenAI, Google, Meta, Anthropic and Mistral, though the list could grow.
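For a rough sense of what that threshold means, a common rule of thumb (not part of the EU text) estimates training compute as about 6 × parameters × training tokens. The sketch below uses that heuristic with hypothetical figures to show how a training run might be compared against the 10²⁵ FLOP bar.

```python
# Rough check of whether a training run crosses the EU's 10^25 FLOP threshold.
# Uses the common "compute ~= 6 * parameters * tokens" heuristic; the figures
# below are hypothetical examples, not drawn from any named model.

GPAI_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6 * parameters * tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Above GPAI threshold" if flops > GPAI_THRESHOLD_FLOPS else "Below threshold")
```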
The document addresses several core areas for GPAI makers: transparency, copyright compliance, risk assessment, and technical and governance risk mitigation. The 36-page draft covers a lot of ground (and will likely balloon further before it's finalized), but several highlights stand out.
The code emphasizes transparency in AI development and requires AI companies to disclose the web crawlers they used to train their models, a key concern for copyright holders and creators. The risk assessment section aims to prevent cybercrime, widespread discrimination and loss of control over AI (the "it went rogue" moment from a million bad science-fiction movies).
AI makers are expected to adopt a Safety and Security Framework (SSF) to break down their risk-management policies and mitigate them in proportion to their systemic risks. The rules also cover technical areas such as protecting model data, providing fail-safe access controls and continually reassessing their effectiveness. Finally, the governance section aims to build accountability within the companies themselves, requiring ongoing risk assessments and, where necessary, bringing in outside experts.
As with the EU's other tech-related regulations, companies that run afoul of the AI Act can face steep penalties: fines of up to €35 million (currently $36.8 million) or up to seven percent of their global annual turnover, whichever is greater.
Stakeholders are invited to submit feedback through the dedicated Futurium platform until November 28 to help shape the next draft. The rules are expected to be finalized by May 1, 2025.