The EU AI Act Newsletter #56: General-Purpose AI Rules

Members of the European Parliament sent a letter to the AI Office asking for greater inclusion of civil society and other stakeholders in the drafting of codes of practice for general-purpose AI.

Legislative Process

MEPs have questions about the codes of practice process: POLITICO's Morning Tech published a letter (unfortunately behind a paywall) by MEPs Brando Benifei, Svenja Hahn, Katerina Konečná, Sergey Lagodinsky, Kim Van Sparrentak, Axel Voss and Kosma Złotowski urging the EU's AI Office to include civil society in the drafting of rules for powerful AI models. In the letter, they express concern that the Commission initially plans to involve only AI model providers, potentially allowing them to define these practices themselves. The MEPs argue this approach could undermine the development of a robust, globally influential code of practice for general-purpose AI models. They stress the importance of an inclusive process involving diverse voices from companies [...]

Outline:
(00:21) Legislative Process
(03:37) Analyses

First published: July 8th, 2024

Source: https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-56-general

Narrated by TYPE III AUDIO.

About the Podcast

Up-to-date developments and analyses of the EU AI Act. Narrations of the “EU AI Act Newsletter”, a biweekly newsletter by Risto Uuk and The Future of Life Institute.

ABOUT US

The Future of Life Institute (FLI) is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement. Our EU transparency register number is 787064543128-10. In Europe, FLI has two key priorities: i) promote the beneficial development of artificial intelligence and ii) regulate lethal autonomous weapons. FLI works closely with leading AI developers to prepare its policy positions, funds research through recurring grant programs and regularly organises global AI conferences. FLI created one of the earliest sets of AI governance principles – the Asilomar AI principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary General’s Digital Cooperation Roadmap.