Using the Smartest AI to Rate Other AI

In this episode, I walk through a Fabric Pattern that assesses how well a given model performs on a task relative to humans. The system uses your smartest AI model to evaluate the performance of other AIs, scoring them across a range of tasks and comparing them to human intelligence levels.

I talk about:

1. Using One AI to Evaluate Another: The core idea is simple: use your most capable model (like Claude 3 Opus or GPT-4) to judge the outputs of another model (like GPT-3.5 or Haiku) against a task and input. This gives you a way to benchmark quality without manual review. (A rough sketch of this idea follows the links below.)

2. A Human-Centric Grading System: Models are scored on a human scale, from “uneducated” and “high school” up to “PhD” and “world-class human.” Stronger models consistently rate higher, while weaker ones rank lower, just as expected.

3. Custom Prompts That Push for Deeper Evaluation: The rating prompt instructs the evaluator to emulate a 16,000+ dimensional scoring system, using expert-level heuristics and attention to nuance. It also asks the evaluator to describe what would have been required to score higher, making this a meta-feedback loop for improving future performance.

Note: This episode was recorded a few months ago, so the AI models mentioned may not be the latest, but the framework and methodology still work with current models.

Subscribe to the newsletter at: https://danielmiessler.com/subscribe
Join the UL community at: https://danielmiessler.com/upgrade
Follow on X: https://x.com/danielmiessler
Follow on LinkedIn: https://www.linkedin.com/in/danielmiessler

See you in the next one!

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.
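For anyone who wants to experiment with the idea before listening, here is a minimal sketch of the judge-model approach, assuming the OpenAI Python SDK with an API key set in the environment. It is not the actual Fabric pattern; the model names, the rubric wording, and the rate_output helper are illustrative placeholders.

```python
# Illustrative sketch of "use your smartest model to rate another model's output."
# Not the actual Fabric pattern; model names and rubric wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_MODEL = "gpt-4o"             # your most capable model (placeholder)
CANDIDATE_MODEL = "gpt-3.5-turbo"  # the model being evaluated (placeholder)

RATING_PROMPT = (
    "You are an expert evaluator. Given a task, its input, and a model's "
    "output, rate the output on a human scale: uneducated, high school, "
    "college, PhD, or world-class human. Explain your reasoning and describe "
    "what would have been required to score one level higher."
)

def rate_output(task: str, task_input: str) -> str:
    # 1. Have the candidate model attempt the task.
    candidate = client.chat.completions.create(
        model=CANDIDATE_MODEL,
        messages=[{"role": "user", "content": f"{task}\n\n{task_input}"}],
    )
    output = candidate.choices[0].message.content

    # 2. Ask the judge model to grade that output against the human-scale rubric.
    judgment = client.chat.completions.create(
        model=JUDGE_MODEL,
        messages=[
            {"role": "system", "content": RATING_PROMPT},
            {
                "role": "user",
                "content": f"TASK:\n{task}\n\nINPUT:\n{task_input}\n\nOUTPUT:\n{output}",
            },
        ],
    )
    return judgment.choices[0].message.content

if __name__ == "__main__":
    print(rate_output("Summarize the key security risks in this text.", "Example input text goes here."))
```

Swapping CANDIDATE_MODEL lets you compare how different models land on the same human scale for the same task.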

About the Podcast

Unsupervised Learning is about ideas and trends in Cybersecurity, National Security, AI, Technology, and Culture—and how best to upgrade ourselves to be ready for what's coming.