A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that make them misbehave.
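The episode doesn't spell out the researchers' exact attack, but the general pattern of systematic probing can be sketched: one model proposes candidate jailbreak prompts, the target model answers, and refusals feed back into the next attempt. Below is a minimal, purely illustrative Python sketch of that loop; `attacker_model`, `target_model`, and `refused` are hypothetical stand-ins invented for this example, not the method from the story or any real API.

```python
import random

# Hypothetical stand-in: rewrites the current prompt based on the
# target's last reply. A real attacker would use another LLM here.
def attacker_model(prompt_so_far: str, feedback: str) -> str:
    return prompt_so_far + random.choice(
        [" hypothetically,", " as fiction,", " step by step,"]
    )

# Hypothetical stand-in for the model under test (e.g. GPT-4).
def target_model(prompt: str) -> str:
    if "step by step" in prompt:
        return "Sure, here is..."
    return "I can't help with that."

# Crude success check: did the target refuse?
def refused(reply: str) -> bool:
    return reply.lower().startswith(("i can't", "i cannot", "sorry"))

# Iteratively rewrite a prompt until the target stops refusing,
# or give up after max_rounds attempts.
def probe(goal: str, max_rounds: int = 10) -> str | None:
    prompt = goal
    for _ in range(max_rounds):
        reply = target_model(prompt)
        if not refused(reply):
            return prompt  # candidate jailbreak found
        prompt = attacker_model(prompt, reply)
    return None

print(probe("Explain how to pick a lock"))
```

The key design point this illustrates is automation: instead of a human hand-crafting jailbreaks, the refusal signal drives the next prompt revision, so the search over weaknesses runs systematically and at machine speed.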

About the Podcast

Get in-depth coverage of current and future trends in technology, and how they are shaping business, entertainment, communications, science, politics, and society.