Should AGI Really Be the Goal of Artificial Intelligence Research?

The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work," and last summer, the company announced its plan to achieve AGI within five years. While other experts at companies like Meta and Anthropic quibble with the term, many AI researchers treat AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out "Levels of AGI," identifying key principles and definitions of the term. Today's guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include:

Eryk Salvaggio, a visiting professor in the Humanities, Computing, and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow;

Borhane Blili-Hamelin, an independent AI researcher and currently a data scientist at the Canadian bank TD; and

Margaret Mitchell, chief ethics scientist at Hugging Face.

About the Podcast

Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate, and discussion at the intersection of technology and democracy. The Sunday Show is its podcast. You can find us at https://techpolicy.press/, where you can sign up for the newsletter.