ChatGPT’s New Parental Controls Are Broken By Design

OpenAI announced parental controls for ChatGPT on September 29, 2025, but after testing them myself, I found critical flaws that make them ineffective for protecting children. In this video, I break down exactly what's wrong with the current system and what parental controls should actually look like.

Key Problems Covered:
- Children can bypass controls by simply logging out
- Defaults are not safe-by-default (backwards safety engineering)
- Zero visibility for parents (not even conversation summaries)
- Controls are not tamper-proof
- The system appears designed for PR, not actual child protection

What Real Controls Would Include:
- Tamper-proof architecture
- Safe-by-default settings
- Tiered monitoring (metadata + summaries, not full surveillance)
- Age-appropriate developmental stages

Do not rely on these controls. Kids should use ChatGPT only in shared spaces, with ongoing parental engagement and clear family agreements.

About the Podcast

Ben Gillenwater helps families protect children from digital dangers, bringing 30 years of cybersecurity expertise to the parenting journey. His background includes working with the NSA and serving as Chief Technologist of a $10 billion IT company, where he built global-scale systems and came to understand technology's risks at every level. His mission began when he gave his young son an iPad with "kid-safe" apps, only to discover inappropriate content days later. Despite his deep technical background, Ben realized that if protecting children online was challenging for him, it must be even more difficult for parents without his expertise. Through Family IT Guy, Ben creates videos and articles that help parents and kids learn how to leverage the positive parts of the internet while avoiding the dangerous and risky parts. His approach bridges the knowledge gap between complex technology and practical family protection, making digital safety accessible to everyone.