How can we develop AI systems that are more respectful, ethical, and sustainable? - Highlights - DR. SASHA LUCCIONI

"My TED Talk and work are really about figuring out how, right now, AI is using resources like energy and emitting greenhouse gases and how it's using our data without our consent. I feel that if we develop AI systems that are more respectful, ethical, and sustainable, we can help future generations so that AI will be less of a risk to society.  And so really, artificial intelligence is not artificial. It's human intelligence that was memorized by the model that was kind of hoovered up, absorbed by these AI models. And now it's getting regurgitated back at us. And we're like, wow, ChatGPT is so smart! But how many thousands of human hours were needed in order to make ChatGPT so smart? The US Executive Order on AI still does need a lot of operationalization by different parts of the government. Especially, with the EU and their AI Act, we have this signal that's top down, but now people have to figure out how we legislate, enforce, measure, and evaluate? So, there are a lot of problems that haven't been solved because we don't have standards or legal precedent for AI. So I think that we're really in this kind of intermediate phase and scrambling to try to figure out how to put this into action.”

About the Podcast

What are the dangers, risks, and opportunities of AI? What role can we play in designing the future we want to live in? With the rise of automation, what is the future of work? We talk to experts about the roles that governments, organizations, and individuals can play to make sure powerful technologies truly make the world a better place for everyone. Conversations with futurists, philosophers, AI experts, scientists, humanists, activists, technologists, policymakers, engineers, science fiction authors, lawyers, designers, and artists, among others. The interviews are hosted by founder and creative educator Mia Funk with the participation of students, universities, and collaborators from around the world.