EU Releases AI Safety Guidelines; Meta Declines Participation
Meta has declined to sign the European Union's new voluntary code of practice for artificial intelligence, Joel Kaplan, Meta's Chief Global Affairs Officer, announced on LinkedIn. The code of practice accompanies the AI Act, which was enacted last year and focuses on large AI models. Companies have until August 2 to come into compliance with the Act, though signing the code itself remains voluntary.

Kaplan criticized the EU's approach, arguing it would stifle AI innovation and economic growth in Europe. The AI Act targets models that pose systemic risks to public health, safety, fundamental rights, or society, including those from Meta, OpenAI, Google, and Anthropic.

The European Commission has published new guidelines to help AI companies meet the AI Act's requirements, following its earlier release of the voluntary code of practice. Companies that sign the code commit to copyright protections, safety protocols, and transparency requirements, and may gain greater legal certainty in return. Noncompliance with the AI Act can bring fines of up to 7% of annual global revenue.

Meta joins other companies, including the European firms Mistral AI and Airbus, in pushing back against the EU's AI rules over concerns that they will slow AI development. OpenAI, by contrast, has agreed to sign the EU's Code of Practice for General Purpose AI, citing its commitment to providing safe AI models for Europeans.

Disclosure: Ziff Davis, the parent company of Mashable, has initiated a lawsuit against OpenAI for alleged copyright violations.