164 - The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge

Are you prepared for the hidden UX taxes that AI and LLM features might be imposing on your B2B customers—without your knowledge? Are you certain that your AI product or features are truly delivering value, or are there unseen taxes working against your users and your product or business? In this episode, I'm delving into some of the UX challenges that I think need to be addressed when implementing LLM and AI features in B2B products.

While AI seems to offer the chance for significantly enhanced productivity, it also introduces a new layer of complexity for UX design. This complexity is not limited to the challenges of designing in a probabilistic medium (i.e., ML/AI); it also includes being able to define what "quality" means. When the product team does not have a shared understanding of what a measurably better UX outcome looks like, improved sales and user adoption are less likely to follow.

I'll also discuss aspects of designing for AI that may be invisible on the surface. How might AI-powered products change the work of B2B users? What are some of the traps I see startup clients and founders I advise in MIT's Sandbox venture fund fall into?

If you're a product leader in B2B / enterprise software and want to make sure your AI capabilities don't end up creating more damage than value for users, this episode will help!

Highlights / Skip to

- Improving your AI model accuracy improves outputs—but customers only care about outcomes (4:02)
- AI-driven productivity gains also put the customer's "next problem" in front of them sooner. Are you addressing the most urgent problem they now have—or the one they used to have? (7:35)
- Products that win will combine AI with tastefully designed deterministic software—because doing everything for everyone well is impossible, and most models alone aren't products (12:55)
- Just because your AI app or LLM feature can do "X" doesn't mean people will want it or change their behavior (16:26)
- AI agents sound great—but there is a human UX too, and it must enable trust and intervention at the right times (22:14)
- Not overheard from customers: "I would buy this/use this if it had AI" (26:52)
- Adaptive UIs sound like they'll solve everything—but to reduce friction, they need to adapt to the person, not just the format of model outputs (30:20)
- Introducing AI introduces more states and scenarios that your product may need to support, and these may not be obvious right away (37:56)

Quotes from Today's Episode

"Product leaders have to decide how much effort and resources to put into model improvements versus improving the user's experience. Obviously, model quality is important in certain contexts and regulated industries, but when GenAI errors and confabulations are lower risk to the user (i.e., they create minor friction or inconveniences), the broader user experience you facilitate might be what actually determines the true value of your AI features or product. Model accuracy alone is not necessarily going to lead to happier users or increased adoption. ML models can be quantifiably tested for accuracy with structured tests, but just because they're easier to test for quality than something like UX doesn't mean users value those improvements more. The product will stand a better chance of creating business value when it clearly demonstrates that it is improving your users' lives." (5:25)

"When designing AI agents, there is still a human UX—a beneficiary—in the loop. They have an experience, whether you designed it with intention or not. How much transparency needs to be given to users when an agent does work for them? Should users be able to intervene when the AI is doing this type of work? Handling errors is something we do in all software, but what about retraining and learning so that future user experiences are better? Is the system learning anything while it's going through this—and can I tell if it's learning what I want/need it to learn? What about humans in the loop who might intervene?"

About the Podcast

Are you an enterprise data or product leader seeking to increase the user adoption and business value of your ML/AI and analytical data products? While it is easier than ever to create ML and analytics from a technology perspective, do you find that getting users to use, buyers to buy, and stakeholders to make informed decisions with data remains challenging? If you lead an enterprise data team, have you heard that a "data product" approach can help—but you're not sure what that means, or whether software product management and UX design principles can really change consumption of ML and analytics?

My name is Brian T. O'Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting product designer's perspective on why simply creating ML models and analytics dashboards isn't sufficient to routinely produce outcomes for your users, customers, and stakeholders. My goal is to help you design more useful, usable, and delightful data products by better understanding your users', customers', and business sponsors' needs. After all, you can't produce business value with data if the humans in the loop can't or won't use your solutions.

Every 2 weeks, I release solo episodes and interviews with chief data officers, data product management leaders, and top UX design and research professionals working at the intersection of ML/AI, analytics, design, and product—and now, I'm inviting you to join the #ExperiencingData listenership. Transcripts, 1-page summaries, and quotes are available at: https://designingforanalytics.com/ed

ABOUT THE HOST

Brian T. O'Neill is the Founder and Principal of Designing for Analytics, an independent consultancy helping technology leaders turn their data into valuable data products. He is also the founder of The Data Product Leadership Community. For over 25 years, he has worked with companies including DellEMC, Tripadvisor, Fidelity, NetApp, Roche, Abbvie, and several SaaS startups. He has spoken internationally, giving talks at O'Reilly Strata, Enterprise Data World, the International Institute for Analytics Symposium, Predictive Analytics World, and Boston College. Brian also hosts the highly rated podcast Experiencing Data, advises students in MIT's Sandbox Innovation Fund, and has been published by O'Reilly Media. He is also a professional percussionist who has backed up artists like The Who and Donna Summer, and he has graced the stages of Carnegie Hall and The Kennedy Center. Subscribe to Brian's Insights mailing list at https://designingforanalytics.com/list.