On this episode of The Peel, host Shelley McGuire sits down with Austin Keller, Director of Data Science at IntelliDyne, to unpack some of the most misunderstood concepts in artificial intelligence. With a background that spans secure generative AI, Navy operational analytics, public health, and veteran suicide prevention, Austin brings both technical depth and real-world perspective to the conversation.
The episode dives into timely questions about AI reliability, including what "hallucinations" really mean in AI systems and why they occur. Shelley and Austin explore how techniques like retrieval-augmented generation help ground AI outputs in real, up-to-date information, and why simply deploying a model isn't enough, especially in government and healthcare environments where accuracy matters.
Austin explains how AI can best support analysts and practitioners by summarizing, comparing, and organizing massive volumes of data, while still requiring human oversight, validation, and judgment. The conversation highlights where AI excels, where it needs guardrails, and why understanding how these systems work is critical to using them responsibly.
To connect with Austin, follow him on LinkedIn here.
Learn more about IntelliDyne by visiting their website here.
Click here to listen on Spotify.
If you liked listening to this episode, click here to view more in this series. To view all of our podcast series, visit our podcast page.
