The EU AI Act Newsletter #73: Scientific Panel Rules

The Commission has established rules for a new scientific advisory group of independent AI experts.

Legislative Process

Adoption of implementing act for the scientific panel: The European Commission has adopted rules for a new scientific advisory group of independent AI experts. This panel, mandated by the AI Act, will support the AI Office and national authorities in implementing and enforcing the legislation. The advisory body will provide technical advice on enforcement matters and can alert the AI Office to potential risks from general-purpose AI models. The Commission has adopted an implementing act outlining the panel's establishment and operational procedures. The next step is launching a call for expressions of interest to select suitable experts for this governance role.

The final draft of the Code: The third and final draft of the General-Purpose AI Code of Practice has been published by independent experts, featuring a streamlined structure with refined [...]

Outline: (00:34) Legislative Process, (02:39) Analyses

First published: March 17th, 2025

Source: https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-73-scientific

Narrated by TYPE III AUDIO.

About the Podcast

Up-to-date developments and analyses of the EU AI Act. Narrations of the “EU AI Act Newsletter”, a biweekly newsletter by Risto Uuk and The Future of Life Institute.

ABOUT US

The Future of Life Institute (FLI) is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement. Our EU transparency register number is 787064543128-10. In Europe, FLI has two key priorities: i) promote the beneficial development of artificial intelligence and ii) regulate lethal autonomous weapons. FLI works closely with leading AI developers to prepare its policy positions, funds research through recurring grant programs and regularly organises global AI conferences. FLI created one of the earliest sets of AI governance principles – the Asilomar AI principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary General’s Digital Cooperation Roadmap.