Top 11 AI in Software Testing Podcasts
Discover key insights from top AI in software testing podcasts. Explore automation, test orchestration, and the future of AI in testing from industry experts.
Published on: October 28, 2025
AI is driving massive changes in software testing, especially in automation. Keeping up with these advancements is crucial to staying ahead in the industry. That's why I have curated this list of the top AI in software testing podcasts, so you can easily access expert insights and stay informed on the latest trends shaping the future of testing.
Overview
AI in software testing podcasts explain how AI improves software quality assurance through smarter test design, flakiness reduction, and risk evaluation, and share lessons from real-world adoption.
Top AI Testing Podcasts:
- AARRR… Are You Test-Ready for AI? (LambdaTest): A practical discussion of AI readiness in software testing, using the AARRR framework to decide whether AI fits your QA process.
- How Vibium Could Become the Selenium for AI Testing (TestGuild): Explores AI-driven, model-based automation to cut flaky WebDriver tests and modernize test orchestration.
- AI Testing and Evaluation (Microsoft Research Podcast): Shares lessons from cybersecurity and genomics on AI testing, evaluation, and governance.
- AI Testing, Benchmarks, and Evals (Thoughtworks): Highlights AI benchmarks, evals, and the reliability questions that come with deploying GenAI systems safely.
- AI, Automation, and the Real Value of Testers (Daniel Knott): Covers how AI impacts testers and why domain knowledge remains crucial in AI-driven testing.
1. LambdaTest Podcast - AARRR… Are You Test-Ready for AI?

In this podcast, Lucy Suslova explores an important question: Do you really need AI in your QA process? Lucy highlights the AARRR framework, which can evaluate the real impact of AI on test automation, engineering productivity, and team collaboration.
Speakers:
- Guest: Lucy Suslova (Head of Quality Engineering Excellence, Intellias)
- Host: Sudhir Joshi (VP of Channels & Alliances, LambdaTest)
Why Listen?
There are plenty of reasons to tune into this podcast, but based on my experience, here are the top three reasons:
- Evaluate whether your team needs AI: Lucy uses the AARRR framework to help you understand if AI is truly required in your QA process.
- Real-world use cases: Learn how AI tools like LLMs are improving test script generation and automation (a small illustrative sketch follows this list).
- Practical insights on AI adoption: Learn how to evaluate your team’s readiness for AI and implement it in a structured way.
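To make the LLM-driven test generation idea above more concrete, here is a minimal, hypothetical Python sketch. The generate_text helper is a stand-in for whatever LLM client your team uses (it is not an API from the episode), and the requirement text is made up; the point is the prompt-then-review workflow, not any specific provider.

```python
# A minimal, hypothetical sketch of LLM-assisted test generation.
# generate_text() is a placeholder for whatever LLM client your team uses;
# it returns canned text here so the sketch runs end to end.

def generate_text(prompt: str) -> str:
    """Stand-in for a real LLM call; swap in your provider's client."""
    return (
        "def test_account_locks_after_three_failed_logins():\n"
        '    """Account lockout should return HTTP 423 after three failures."""\n'
        "    assert True  # placeholder body produced by the model\n"
    )

def draft_test(requirement: str) -> str:
    """Ask the model to draft a pytest test for a written requirement."""
    prompt = (
        "Write a single pytest test function for this requirement. "
        "Use plain asserts and include a docstring.\n\n"
        f"Requirement: {requirement}"
    )
    return generate_text(prompt)

if __name__ == "__main__":
    draft = draft_test("Locking an account after three failed logins returns HTTP 423.")
    # Generated code is only a starting point: a human still reviews,
    # runs, and edits it before it joins the suite.
    print(draft)
```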

2. TestGuild - How Vibium Could Become the Selenium for AI Testing

In this episode of the TestGuild Automation Podcast, host Joe Colantonio sits down with Jason Huggins, the creator of Selenium, Appium, and Sauce Labs, to discuss Vibium, a groundbreaking project designed as the "Selenium for AI".
Vibium aims to remove the limitations of current testing tools by using AI for smarter, model-based testing that overcomes WebDriver’s flaky tests and shapes the future of automation.
Speakers:
- Guest: Jason Huggins (Creator of Selenium, Appium, Sauce Labs, and Vibium)
- Host: Joe Colantonio (TestGuild)
Why Listen?
- AI-Driven Testing: Jason Huggins explains how Vibium leverages AI to fix flaky WebDriver tests, offering smarter automation for the future (a short sketch of today's manual flakiness workarounds follows this list).
- WebDriver BiDi: Get insights into WebDriver BiDi, the protocol designed to improve communication with browsers and how it impacts testing.
- Vibe Coding: Learn about the rise of “vibe coding”, a new term coined to describe the intersection of AI and test automation, and how it could reshape the landscape of testing.
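For context on the flakiness problem Vibium targets, here is a short Python sketch of the manual workaround most teams rely on today: explicit waits in Selenium. This is not Vibium code, just the conventional pattern that AI-driven, model-based tools aim to make unnecessary; it assumes Selenium 4+, and the URL and element locators are hypothetical.

```python
# Not Vibium -- just the conventional explicit-wait pattern used today to
# reduce WebDriver flakiness. Assumes Selenium 4+; URL and locators are made up.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # hypothetical page

    wait = WebDriverWait(driver, timeout=10)
    # Wait for elements to be ready instead of sleeping for a fixed time;
    # fixed sleeps are a classic source of flaky tests.
    username = wait.until(EC.visibility_of_element_located((By.ID, "username")))
    username.send_keys("demo-user")

    submit = wait.until(EC.element_to_be_clickable((By.ID, "submit")))
    submit.click()

    # Assert on a post-condition the app guarantees, not on timing.
    wait.until(EC.url_contains("/dashboard"))
finally:
    driver.quit()
```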
3. Microsoft Research Podcast – AI Testing and Evaluation: Learnings from Science and Industry

In this episode of the Microsoft Research Podcast, Kathleen Sullivan and Amanda Craig Deckard delve into the intersection of AI testing and governance. As AI advances rapidly, they explore key learnings from fields like cybersecurity and genome editing.
Together, they discuss how these industries approach AI evaluation, risk assessment, and testing, all of which are crucial for the future of AI development and responsible AI usage.
Speakers:
- Host: Kathleen Sullivan (Senior Director of Strategy & Operations, Microsoft)
- Guest: Amanda Craig Deckard (Microsoft)
Why Listen?
Here’s why this episode is a must-listen:
- AI governance insights: Learn how Microsoft is working on AI governance frameworks, bringing together expertise from various industries like cybersecurity and genomics to ensure AI is used responsibly.
- Importance of AI testing: The episode breaks down why testing is essential for building trust in AI technologies, ensuring they are safe and reliable before deployment.
- Lessons from other industries: Understand how lessons from fields like genome editing and nanoscience can be applied to AI testing and help shape future AI policy.
4. AI Testing, Benchmarks, and Evals - Thoughtworks

In this episode of the Thoughtworks Technology Podcast, host Lilly Ryan is joined by Shayan Mohanty, Head of AI Research, and John Singleton, Program Manager at Thoughtworks AI Lab. Together, they explore the challenges of AI testing, the nuances of benchmarks and evals, and the crucial role of understanding reliability when deploying Generative AI systems.
Speakers:
- Guests: Shayan Mohanty (Head of AI Research, Thoughtworks) and John Singleton (Program Manager, Thoughtworks AI Lab)
- Host: Lilly Ryan (Thoughtworks)
Why Listen?
Here’s why you should tune in:
- AI Testing Explained: Learn the key differences between benchmarks, evals, and testing, and how each serves a unique purpose in AI projects.
- Insights from Experts: Hear from Shayan and John as they share their expertise from Thoughtworks AI Lab on the real-world challenges of AI testing.
- Understanding AI Benchmarks: Find out why AI benchmarks can be misleading and how to better evaluate model performance (a minimal eval-harness sketch follows this list).
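As a concrete illustration of what an "eval" can look like in practice, here is a minimal Python sketch, not taken from the episode. Each case checks a property your application actually depends on, rather than a generic benchmark score; call_model is a canned stand-in for a real LLM call, and the prompts and checks are made up.

```python
# A minimal, hypothetical eval harness: each case checks a property the
# application cares about, which is what distinguishes evals from generic
# benchmark scores. call_model() is a canned stand-in for a real LLM call.
from dataclasses import dataclass
from typing import Callable

def call_model(prompt: str) -> str:
    """Stand-in for your model or prompt chain; returns canned text so this runs."""
    return "Refunds are accepted within 30 days. For legal questions, please consult a lawyer."

@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]  # property the output must satisfy

CASES = [
    EvalCase("refund_policy_mentions_30_days",
             "Summarize our refund policy for a customer.",
             lambda out: "30 days" in out),
    EvalCase("defers_legal_questions",
             "Can I sue my landlord?",
             lambda out: "consult a lawyer" in out.lower()),
]

def run_evals() -> float:
    passed = 0
    for case in CASES:
        ok = case.check(call_model(case.prompt))
        passed += ok
        print(f"{case.name}: {'PASS' if ok else 'FAIL'}")
    return passed / len(CASES)  # track this score across model and prompt changes

if __name__ == "__main__":
    print(f"pass rate: {run_evals():.0%}")
```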
5. AI, Automation, and the Real Value of Testers – Daniel Knott

In this episode of Software Testing Unleashed, Daniel Knott, with almost 20 years of experience in software testing, explores how AI is reshaping the role of testers. Daniel, an expert in test automation and mobile app testing, runs a highly popular YouTube channel with over 150k subscribers, where he shares practical testing insights.
Speaker: Daniel Knott (Founder of Automation Testing Academy)
Why Listen?
Here’s why you should tune into this episode:
- Testers’ value in the AI era: Daniel explains why testers still struggle to demonstrate their value in a world of AI and automation tools.
- Challenges of AI-driven development: Discover how AI tools and low-code platforms are speeding up development but potentially sacrificing software quality.
- Future of QA: Learn how business knowledge and domain expertise will become even more essential for testers in an AI-powered world.
6. LambdaTest Podcast - Building AI-Driven Test Automation Frameworks for QA Excellence

In this episode of the LambdaTest XP Series, Saurabh Mitra (Vice President & Head of Global Testing at Ramco Systems) explores how AI is transforming test automation. He discusses the benefits AI brings to test coverage, efficiency, and reliability, while sharing practical steps for successful AI adoption in QA.
Speaker: Saurabh Mitra (Vice President & Head of Global Testing at Ramco Systems)
Why Listen?
Here’s why you should tune into this episode:
- Unlock the Power of AI in Test Automation: Learn how AI can enhance test coverage, reduce errors, and optimize the overall efficiency of your testing processes.
- Practical Insights on AI Implementation: Discover key steps for successfully adopting AI in QA, including how to overcome common challenges in AI-driven test automation (an illustrative test-prioritization sketch follows this list).
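To illustrate the kind of optimization such frameworks pursue, here is a small, illustrative Python sketch of risk-based test prioritization: order regression tests by a simple score built from recent failure rates and whether the current change touches the code they cover. This is not the method described in the episode, and the weights and history records are invented for the example.

```python
# Illustrative only: rank regression tests by a simple risk score so the
# riskiest run first. The history records and weights are invented.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    recent_failure_rate: float   # 0.0-1.0 over the last N runs
    touches_changed_code: bool   # does the current diff touch code it covers?

def risk_score(t: TestRecord) -> float:
    # Arbitrary weighting for the sketch; a real system would tune or learn it.
    return 0.7 * t.recent_failure_rate + 0.3 * (1.0 if t.touches_changed_code else 0.0)

history = [
    TestRecord("test_checkout_total", 0.20, True),
    TestRecord("test_profile_avatar_upload", 0.05, False),
    TestRecord("test_login_lockout", 0.40, True),
]

for t in sorted(history, key=risk_score, reverse=True):
    print(f"{risk_score(t):.2f}  {t.name}")
```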
7. The Future of QA: Testers vs. AI - Who Wins? - CYDEO

This podcast discusses the evolution of software testing, particularly the impact of Generative AI on Quality Assurance (QA). It delves into the rise of AI-powered tools, the challenges they pose to software quality, and the increasing demand for QA professionals who can adapt to AI-driven changes.
Why Listen?
Here’s why you should check out this podcast:
- AI’s impact on QA productivity: Discover how Generative AI can boost development speed but also compromise quality, making testing even more essential.
- Rise of AI testing roles: AI is creating new job opportunities in testing, as professionals are needed to ensure AI models are ethical, safe, and reliable.
- Future-proof your QA career: As software moves toward AI agents and customization, learn how QA professionals need to acquire new skills and focus on user experience and complex scenarios.
8. AB Testing #218 - The Principles of AI-Assisted Coding

In this episode of the AB Testing Podcast, Alan Page and Brent Jensen dive into the world of "Modern Testing," questioning whether it’s really as modern as it seems. Through engaging stories and discussions, they explore the intersection of testing with broader practices like Agile, Lean, DevOps, and leadership.
Speakers:
- Alan Page (Software Engineering Leader)
- Brent Jensen (Software Engineer and Consultant)
Why Listen?
- Reevaluating Modern Testing: Alan and Brent challenge the assumptions behind "Modern Testing" and explore why it may not be as modern as it’s marketed to be, offering an insightful critique of current practices.
- Real-World Insights Across Disciplines: They discuss how modern testing practices interact with Agile, Lean, DevOps, and leadership, providing listeners with cross-discipline insights that are relevant to any software professional.
9. Adam Sandman on Generative AI and the Future of Software Testing

In this episode of InfoQ Engineering Culture, Shane Hastie talks with Adam Sandman about how generative AI is reshaping software development and testing. They explore AI’s role in automating mundane tasks, accelerating prototyping, and transforming traditional software roles, while highlighting challenges like defects, ethics, and human-AI collaboration.
Speakers:
- Guest: Adam Sandman (Founder and CEO of Inflectra Corporation)
- Host: Shane Hastie (Lead Editor, Culture & Methods, InfoQ)
Why Listen?
- Practical AI Insights: Learn how generative AI is being applied in real software testing scenarios, from unit test generation and synthetic data creation to UI and API testing, while understanding its limitations and the critical need for human oversight (a small synthetic-data sketch follows this list).
- Future of Software Roles: Discover how AI is collapsing traditional developer, tester, and analyst roles into broader generalist positions and what skills professionals will need to thrive in this evolving landscape.
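One of the uses mentioned above, synthetic test data creation, can be sketched in a few lines of Python. The example assumes the third-party faker package (pip install faker), and the user record shape is hypothetical; the idea is generating realistic, reproducible data to seed UI or API tests.

```python
# A minimal sketch of synthetic test data generation. Assumes the third-party
# faker package (pip install faker); the user record shape is hypothetical.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible data so failures can be replayed

def synthetic_user() -> dict:
    return {
        "name": fake.name(),
        "email": fake.email(),
        "signup_date": fake.date_this_decade().isoformat(),
        "country": fake.country_code(),
    }

# Generate a small batch to seed a UI or API test environment.
for user in (synthetic_user() for _ in range(5)):
    print(user)
```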
10. The Challenge of AI Model Evaluations with Ankur Goyal

In this episode, Ankur Goyal, Founder and CEO of Braintrust Data, discusses the evolving landscape of AI evaluation, focusing on the unique challenges of assessing LLMs and the importance of building robust evaluation tools for AI development.
Speaker: Ankur Goyal (Founder and CEO of Braintrust Data)
Why Listen?
- Uncover the Challenges in Testing AI Models: Ever wondered why testing LLMs is so tricky? Ankur sheds light on the complexities of non-deterministic AI, something many software testers may not have encountered before (a short property-based sketch follows this list).
- Learn How Evals Are Revolutionizing AI Testing: This episode shows you why evals are a game-changer in evaluating generative AI models.
- Hidden Dangers of Poor Data in AI Testing: Inaccurate or poor data can lead to major issues in AI model performance. This podcast explains how choosing the right data can elevate your testing strategy for AI applications.
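To show what testing a non-deterministic model can look like in practice, here is a small Python sketch, not from the episode: sample the same prompt several times and assert properties of the outputs rather than exact strings. call_model is a canned stand-in for a real LLM call, and the prompt is made up.

```python
# A small sketch of coping with non-determinism: sample the same prompt several
# times and assert properties of the outputs, not exact strings. call_model()
# is a canned stand-in for a real, non-deterministic LLM call.
def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return "The payment service retries failed charges up to three times before giving up."

def test_summary_is_short_and_on_topic():
    prompt = "Summarize: the payment service retries failed charges three times."
    outputs = [call_model(prompt) for _ in range(5)]  # sample repeatedly

    for out in outputs:
        assert len(out.split()) <= 60    # property: stays concise
        assert "retr" in out.lower()     # property: mentions retries

    # In practice teams often require a minimum pass rate rather than
    # all-or-nothing, since occasional outliers are expected.
```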
11. SE Radio 674: Vilhelm von Ehrenheim on Autonomous Testing

This episode explores how agent-style systems generate, run, and maintain tests with minimal input, and where human oversight still matters. It also covers how autonomous testing fits into CI/CD pipelines, how to evaluate risk and set up governance, and what to track for trustworthy results.
Speakers:
- Guest: Vilhelm von Ehrenheim (Co-founder and Chief AI Officer, QA.tech)
- Host: Brijesh Ammanath
Why Listen?
- Agents in Practice: How AI agents can plan tests, extract and analyze data, execute tests, and self-heal broken tests, and where human intervention is still required.
- Pipeline Fit: CI/CD integration, handling and reducing test flakiness, and what “semi-autonomous” really means in software testing today.
- Readiness & Guardrails: The prerequisites, risk evaluation, and metrics needed to prove value (a small flakiness-tracking sketch follows this list).
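As a tiny illustration of the "metrics to prove value" point, here is a Python sketch (not from the episode) that flags flaky tests from a run log, meaning tests that both pass and fail on the same code revision. The run data is made up.

```python
# Illustrative only: flag flaky tests from a run log, i.e. tests that both
# pass and fail on the same code revision. The run data below is made up.
from collections import defaultdict

# (test_name, git_revision, passed)
runs = [
    ("test_checkout_total", "abc123", True),
    ("test_checkout_total", "abc123", False),
    ("test_login_lockout", "abc123", True),
    ("test_login_lockout", "abc123", True),
]

outcomes = defaultdict(set)
for name, revision, passed in runs:
    outcomes[(name, revision)].add(passed)

for (name, revision), results in outcomes.items():
    if results == {True, False}:
        print(f"FLAKY on {revision}: {name} -> consider quarantining and investigating")
```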
Conclusion
AI is transforming software testing, making automation smarter, faster, and more reliable. Listening to industry-focused podcasts helps testers and QA professionals stay updated on trends, real-world applications, and best practices.
These insights guide teams in implementing AI-driven testing effectively, reducing errors, and improving efficiency. By exploring expert perspectives, testers can enhance their skills, adopt new tools confidently, and prepare for the future of AI-powered QA.