The EU AI Act Newsletter #69: Big Tech and AI Standards

Corporate Europe Observatory has published an overview arguing that European AI standard-setting bodies are heavily dominated by tech industry representatives.

Analysis

Feedback on the second Code of Practice draft: Academics at the Leverhulme Centre for the Future of Intelligence reviewed the second draft of the Code of Practice for General-Purpose AI and acknowledged significant improvements over the first version. The authors praise the balanced approach between Commitments, Measures and KPIs in establishing governance for general-purpose AI models. Their recommendations for the next draft focus on the following areas: increased attention to inference-time considerations; development of a more adaptable tiered system of obligations; stronger external assessment requirements; and refinement of the framework for capabilities, propensities and context. The document also suggests methodological improvements to risk assessment, including changes to the taxonomy and more thorough evaluations of risk sources, alongside complementary capability-based evaluations with concrete outcome metrics and vulnerability assessments. Specific recommendations include [...]

First published: January 20th, 2025
Source: https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-69-big-tech

Narrated by TYPE III AUDIO.

About the Podcast

Up-to-date developments and analyses of the EU AI Act. Narrations of the “EU AI Act Newsletter”, a biweekly newsletter by Risto Uuk and the Future of Life Institute.

ABOUT US

The Future of Life Institute (FLI) is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement. Our EU transparency register number is 787064543128-10.

In Europe, FLI has two key priorities: i) promote the beneficial development of artificial intelligence and ii) regulate lethal autonomous weapons. FLI works closely with leading AI developers to prepare its policy positions, funds research through recurring grant programs and regularly organises global AI conferences. FLI created one of the earliest sets of AI governance principles – the Asilomar AI Principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary-General’s Digital Cooperation Roadmap.