XP Series Webinar

AARRR...Are you Test-Ready for AI? Discover If AI Can Transform QA Process

May 22nd, 2025

40 Mins


Lucy Suslova (Guest)

Head of Quality Engineering Excellence, Intellias


Sudhir Joshi (Host)

VP of Channels & Alliances, LambdaTest


The Full Transcript

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - Hello, everyone. Welcome to another podcast. My name is Sudhir, and I head alliances and channels at LambdaTest. AI is all the rage. Everyone is talking about it. But before you jump on the bandwagon, here is a question: “Do you really need AI in your QA process?”

According to Gartner, 70% of organizations are exploring AI for software testing. But the real question is, what exactly do you need it for? In this session, we'll guide you through the AARRR framework to assess whether AI is the right fit for your QA strategy. We'll explore how AI can help boost engineering productivity, streamline testing workflows, and elevate collaboration across teams.

We'll also share insights on how AI-powered tools have been enhancing test automation, driving faster releases, and helping teams stay ahead of the curve. By the end, hopefully, you'll have a clear picture of whether AI is a true asset for your QA processes or simply overhyped.

With that, we have our special guest today, Lucy. She heads Quality Engineering Excellence at Intellias, where she leads a horizontal function that drives the growth and adoption of engineering best practices across the company's diverse centers of excellence, including front-end and back-end development, quality engineering, cloud, systems engineering, and beyond.

With over a decade of experience in the tech world, Lucy is passionate about helping the organization unlock its full potential through innovative practices and cutting-edge solutions. She's focused on making things better, faster, and smarter, whether that's implementing AI in testing, scaling cloud infrastructure, or pushing the boundaries of software engineering. Lucy thrives on fostering strong cross-functional connections that fuel excellence, creativity, and technical innovation.

With that, a warm welcome to Lucy on the show. Lucy, anything before we get straight onto the questions from you?

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - Thank you very much, Sudhir, for such a nice introduction. It sounded really great. Thank you. And thank you for inviting me and having me here. I am really happy to join and discuss all these topics related to AI and QA, and to share my thoughts with the global community.

And actually, yeah, you pretty much covered everything. I'm really passionate about technology, about everything related to AI, because AI is everywhere right now. When you open, I don't know, LinkedIn, some tech blog, whatever, you're definitely going to see millions of mentions of AI.

And definitely, as an advocate for excellence within my company, I need to pay attention to that, and I need to find ways we can embed it, and so on and so forth. Related to my experience, I've tried myself, I would say, in different roles. I was a business analyst, a full-stack software developer, and a test automation engineer.

And then finally, I found myself more in managerial stuff, like building excellence, right? And actually, I was deeply involved in building quality engineering processes for several companies. That's my background, yeah, and I think that's it.

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - Yeah, I mean, quite an impressive one, and quite diversified, right? From a BA all the way through software engineering to quality engineering, quite a journey. With that, Lucy, my first question is: what does the AARRR framework stand for, how does it help determine if AI is truly needed for QA, and what are the key criteria for decision making?

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - Yeah, that's actually a great question. Thank you for that. And let me give you a little bit of the background of my thinking about the framework, and actually what it is. And I've been thinking about AI for quite a long time now and tracking innovations, reading some, I don't know, research papers, trying out tools and observing how it's evolving across software engineering.

And at first, it was all hype. Everyone was suddenly, I don't know, generating test cases with ChatGPT, writing code with copilots like GitHub Copilot or similar tools, or playing with some no-code or low-code tools promising, you know, a rapid shift to autonomous quality assurance.

And it was exciting, definitely, but also a little bit overwhelming. But the hype didn't last forever; we quickly moved from the "wow" effect to the "so what" stage, right? What can AI actually do in software development, how is it really applicable, and what tangible value can we extract beyond just experimentation?

And this shifting mindset is part of the broader transformation we're seeing, where AI is no longer just a buzzword; it's becoming fundamental. And this is especially true in quality assurance. If we zoom in on quality assurance specifically, at some point the service company I work for began receiving more and more client inquiries, like: how can we enhance QA with AI? What does it mean to build an AI-infused or AI-native quality strategy?

And to structure our thinking and assess readiness, I turned to frameworks that help teams evaluate their transformation journey. But instead of the usual testing-specific models like TMMi or something like that, I found that frameworks like the business model canvas or the technology adoption lifecycle work better for this purpose.

These models focus on the broader business and technological readiness aspects, such as team culture, existing processes, and readiness for innovation, which I believe are critical when assessing how AI can fit into your QA process specifically. And that's actually a hard truth: even test automation, which should be a given in the world of continuous quality, is not always there.

Like, let's be honest, right? So, how do you assess whether your team is really ready for something bigger, for the introduction of AI to QA in a structured and sustainable way? That's where I found this AARRR framework, and I found it to be surprisingly helpful.

Originally coined by Dave McClure for growth hacking, the AARRR framework is not something related to QA at all. Anyway, explaining the framework itself: the abbreviation stands for acquisition, activation, retention, referral, and revenue. These are the key aspects we consider when measuring something, right?

But when we apply it to QA strategy, especially with AI in the mix, it becomes a powerful way to move from theory to practice. It helps to cut through the noise, prioritize real business impact, and make grounded decisions. And just to clarify, when I'm saying AI in QA, I don't just mean someone, you know, pasting requirements into ChatGPT and copying whatever comes out.

I mean something more structured, right? Think embedding ready-to-use platforms that offer a real shift to autonomous testing, or building custom QA agents trained on domain-specific data, leveraging LLMs to generate test scripts, drive selective test executions, and automate results analysis and reporting, something like that. It's about creating scalable, intelligent testing systems that augment human capability, not just replace it, right?

So the AARRR framework wasn't originally meant for QA. It's from the startup world, focused on customer growth. But when we applied it to AI and QA, it suddenly made a ton of sense for me. Let's break it down and take the first one, acquisition. Acquisition becomes something like this: how are we identifying test gaps or inefficiencies? Do we even need AI, or, for example, is basic test automation not in place yet?

So when we're asking these kinds of questions, we can better understand the problem, why we need to acquire this process or tool or whatever. Then, activation is about early value. Can AI actually help with something like log analysis or flaky test detection right now? It's about proof of concept. It's about low-hanging fruit where you can benefit, not in half a year or a year, but where you can really see the results right now.

And retention asks: is this AI tool, for example, sticky? Are people coming back to use it? Or is it just a one-time, I don't know, wow demo, and that's it? So the point of analyzing retention is to understand whether it is really used, not only within one team or squad, but across the landscape of the whole organization. If we have these tools scaled across lots of teams, it makes a lot of sense, and then we can see the boost in value.

And referral, you can think about referral this way: are our teams talking about it, sharing knowledge, advocating for using this AI-driven tool or process across all the projects? So it's also about scalability, and it's also about, you know, if you're happy with the tool you're using, you want to share it, you want to just say, "Hey, I'm using, I don't know, Playwright, it's working great for me."

"You can try it and you can see the benefits here, here, and here," something like that. And then your teammates try it too, or maybe you're, you know, kind of an influencer in the QA world, and you spread your opinion even outside your organization, in the community, whatever. So if you're talking about this tool or process, it means it really brings value.

And the last R in the framework is revenue. But in our case, I would say it's more about impact. It's where we ask: did this improve velocity or quality, or reduce costs? Here we need to think about the measurable outcomes we are expecting. We can't just say we need AI because everyone is doing that. But why do we use it? What problems do we have?

And when we go through this whole framework, analyzing the readiness, let's say, for AI, it helps cut through the noise and forces teams to focus on outcomes, not just shiny tools. And you can spot whether you're really solving the right problem before investing time and budget. Something like that.
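To make the framework concrete, here is a minimal sketch of how a team might turn those AARRR questions into a lightweight go/no-go checklist. The dimension names come from the framework itself; the specific questions, scoring, and "ready" threshold are illustrative assumptions rather than anything prescribed in the webinar.

```python
# Minimal, illustrative AARRR readiness check for AI in QA.
# The questions and the pass criterion are assumptions, not an official rubric.

AARRR_QUESTIONS = {
    "Acquisition": [
        "Have we identified concrete test gaps or inefficiencies AI should address?",
        "Is basic test automation already in place?",
    ],
    "Activation": [
        "Is there a quick-win use case (log analysis, flaky-test detection) we can prove in weeks?",
    ],
    "Retention": [
        "Would more than one team keep using the tool after the initial demo?",
    ],
    "Referral": [
        "Are teams likely to share and advocate for the tool across projects?",
    ],
    "Revenue": [
        "Do we have a measurable outcome in mind (velocity, quality, cost)?",
    ],
}

def assess(answers: dict[str, list[bool]]) -> None:
    """Print which AARRR dimensions look weak given yes/no answers."""
    for stage, questions in AARRR_QUESTIONS.items():
        replies = answers.get(stage, [])
        score = sum(replies) / len(questions) if questions else 0.0
        status = "ready" if score == 1.0 else "needs work"
        print(f"{stage:<12} {status} ({sum(replies)}/{len(questions)} answered 'yes')")

if __name__ == "__main__":
    # Example: a team with an automation gap but a clear quick win and a metric in mind.
    assess({
        "Acquisition": [True, False],
        "Activation": [True],
        "Retention": [False],
        "Referral": [False],
        "Revenue": [True],
    })
```

The output of a checklist like this maps directly onto Lucy's point: any dimension marked "needs work" is a prompt to fix fundamentals before buying tools.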

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - Yes, well, quite thoughtful, and I love the fact that with the last R, there has to be an ROI.

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - Yes, it's not actually ROI in the pure sense of it, but it's something that you can really measure. It can be anything related to the process. Yeah, but yeah.

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - Yeah, exactly, why have all these efforts been made? What you said absolutely fits the theme really well, so that's a really great explanation. Moving to the next question: what are the key organizational challenges when implementing AI, especially around QA?

And how can we overcome the resistance to change? How do you work around the mindset that we have seen in traditional testing? Like you said, people don't even have automation in place, and AI is probably step two. So what are your thoughts?

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - Yes, sadly, sadly, I'm noticing that some teams, some products, do not have this automation in place. I could really cry. But that's another story. Yeah, actually, thank you for this question. It's really great, because AI is a kind of innovation. It's something new; it's not something we are used to doing or used to using. It's very natural that there is going to be some resistance.

Mapping this to the AARRR framework's activation stage, resistance is natural, especially when AI is seen as, I would say, disruptive. Many QA folks fear being replaced or doubt that AI understands context the way humans do. And that's fair, right? Because it's something new. We need to understand first of all how to work with it, what it is, and what we're going to do with it.

So I would say: frame AI as an assistant, not a replacement. Start small, using AI to summarize test logs, for example, or detect flaky tests. These are tiny steps, but they already let you eliminate some routine from your work. And that's really helpful. These are commonly the pain points test engineers hate, and there AI delivers real value fast.
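As a concrete example of the kind of quick win described here, the sketch below flags flakiness suspects by looking for tests with mixed pass/fail outcomes on the same commit in exported CI history. The input format and the heuristic are assumptions for illustration, not Intellias tooling.

```python
# Illustrative flaky-test heuristic: a test that both passed and failed
# on the same commit is a flakiness suspect. The record format is assumed.
from collections import defaultdict

def find_flaky_tests(runs: list[dict]) -> set[str]:
    """runs: [{"test": "...", "commit": "...", "outcome": "passed" | "failed"}, ...]"""
    outcomes = defaultdict(set)
    for run in runs:
        outcomes[(run["test"], run["commit"])].add(run["outcome"])
    return {test for (test, _), seen in outcomes.items() if {"passed", "failed"} <= seen}

if __name__ == "__main__":
    history = [
        {"test": "test_checkout", "commit": "abc123", "outcome": "passed"},
        {"test": "test_checkout", "commit": "abc123", "outcome": "failed"},
        {"test": "test_login", "commit": "abc123", "outcome": "passed"},
    ]
    print(find_flaky_tests(history))  # {'test_checkout'}
```

A simple report like this, surfaced automatically after each pipeline run, is exactly the sort of small, visible win that helps build the trust Lucy goes on to describe.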

And that helps to build trust. So I think that trust is a key point here. And instead of running, you know, a top-down AI mandate that everyone should use AI or somehow embed or implement it, it's better to create, let's say, AI champions or ambassadors within each squad or team. People who can explore, who can advocate, and who can demystify.

Because right now, around AI, I believe there are lots of unclear things everyone is thinking about. Okay, I should try it, I should use it somehow, but what does it actually mean, and what does it really mean for testing? No one knows. So you need to have, I would say, a team of ambassadors of the change. And through those ambassadors, you can work with the specific teams, with the specific setups, and with that resistance to change.

And you need to work on the habit, the habit of using Copilot, the habit of using AI for your day-to-day work. Then it becomes routine. For me, it's the same as with test automation five or ten years ago. It wasn't something common, but we adjusted, we adopted it, and now it's a very common thing for us. So, something like that. I would say a grassroots approach accelerates activation and improves retention later on.

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - I agree. AI should not be seen as a blue-sky innovation. It is right there next to you. Let's use it day in, day out. Let's not think of it as a special project; it is another necessity. So I completely understand. I'll probably ask two questions, building on what we just spoke about. You touched briefly on this in the framework, but how exactly would one determine whether they need higher AI adoption, in comparison to other companies?

And you mentioned that test creation, going from a user story to test case generation, has been a fairly easy problem to solve. You have historical data you can reference, a kind of plain English, and then you describe that further in terms of test steps. But what else do you see AI contributing, especially when it comes to running pipelines? You mentioned flaky test detection, extremely important. But your thoughts on those two problem statements?

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - Okay, so first of all, I'll touch on the topic of the maturity of the teams, because I think that's really important. If we are trying to understand whether you really need to implement some AI-infused, AI-native solutions for your quality process, you don't want to buy AI just because it's cool, right?

First, you're evaluating the readiness. And here I would say that there is a set of questions. In my thinking, these questions can already give you some understanding of where you are right now. It can be something like: do we capture enough QA signals overall, as a QA team? Is our test data structured and usable?

It's not only the data we use for testing, but all the test artifacts that we have in place. And actually, do we have the right telemetry? I mean structured data from test cases, logs, flakiness reports, and defect trends. Do we have all this data? And one such question is also: are our pipelines stable enough for intelligent orchestration?

Because, you know, quality assurance does not live somewhere in isolation. It is embedded in each and every stage of development. So it's very important that we collaborate with development teams and DevOps teams, and embed quality within the delivery pipelines.

And if the answer to those questions is no, we pause, we just pause and focus on improving the fundamentals, because if you do not have that foundation, it doesn't make sense to move forward in any way. But if yes, we choose use cases that show early results, like using LLMs to auto-generate test cases based on user stories, something like that.
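A minimal sketch of that early-result use case, drafting test cases from a user story with an LLM, might look like the following. It assumes the OpenAI Python SDK as one possible backend; the model name and prompt are placeholders, and the output is a draft for human review, not a finished artifact.

```python
# Sketch: generate test-case drafts from a user story with an LLM.
# Backend, model name, and prompt are assumptions; review the output before use.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def draft_test_cases(user_story: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your organization has approved
        messages=[
            {
                "role": "system",
                "content": "You are a QA assistant. Produce numbered test cases with "
                           "preconditions, steps, and expected results. Flag any ambiguity.",
            },
            {"role": "user", "content": user_story},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    story = ("As a registered user, I want to reset my password via email "
             "so that I can regain access to my account.")
    print(draft_test_cases(story))
```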

And this ensures we activate AI in a way that fits our current maturity and unlocks quick value, creating the momentum we need for retention. You also touched a little bit on test automation, specific scenarios related to test automation and how AI can help with the pipelines, and you mentioned flaky test cases.

Okay, so here I would say that AI now offers the possibility of self-healing. Self-healing is often associated with simple UI tweaks, but the real potential kicks in when you combine AI with code intelligence, historical execution data, and dependency awareness.

AI can monitor code repositories, analyze recent commits, and build a map of which components were modified and what tests are historically linked to those components, services, and so on. And when something breaks, instead of failing blindly, AI can really help here. We have actually tried this approach, and we see really great results. So what can AI really do there?

First of all, AI can trace the change across functions, modules, and services. It can suggest rerouting or regenerating the affected test logic automatically. So you don't even need to be deeply involved as, for example, a test automation engineer; the change is already drafted for you, and you can just review it and accept, decline, or modify it a little.
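A simplified sketch of that idea, selecting tests linked to recently changed code, could look like this. The mapping from modules to tests is hard-coded here for illustration; in practice, as described above, it would be learned from coverage data or past failure history rather than written by hand.

```python
# Illustrative change-impact test selection: map files changed in the last commit
# to tests that have historically exercised (or failed because of) those files.
# HISTORICAL_LINKS is a hypothetical, hand-written stand-in for a learned mapping.
import subprocess

HISTORICAL_LINKS = {
    "src/payments/": ["tests/test_checkout.py", "tests/test_refunds.py"],
    "src/auth/": ["tests/test_login.py"],
}

def changed_files() -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def select_tests(files: list[str]) -> set[str]:
    selected = set()
    for path in files:
        for prefix, tests in HISTORICAL_LINKS.items():
            if path.startswith(prefix):
                selected.update(tests)
    return selected

if __name__ == "__main__":
    print("Affected tests:", select_tests(changed_files()))
```

Swapping the hard-coded map for one built from coverage reports, or from an LLM-backed impact analysis, keeps the same selection logic while adding the intelligence Lucy is describing.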

But you don't need to spend a lot of time digging into why the test is flaky, what was changed, and what the overall impact was. And actually, AI can even rewrite the relevant test snippets using learned patterns from similar changes. But, and that's very important to understand, for that, AI needs to be trained, right?

So it needs to have all the data before we try to use it at full scale, right? We need to make sure that AI has all the relevant data so it can build those patterns, identify them, and map correctly the starting point and the impact. And I believe that these kinds of UI changes and small code changes are really what contribute to test flakiness.

Because it's not really a bug. It's not really a product defect, right? And it's a very common thing that test automation requires maintenance. It's not something that lives standalone, right? It constantly needs human intervention just to review the test run, identify what failed, and figure out whether it's a real bug or test flakiness. And I believe that each and every team has their, you know, percentage of accepted flakiness, because we're not living in an ideal world.

And that's what AI can actually do in the background. As a testing agent, AI workflows can be integrated into the delivery pipelines. For example, when a commit is made, when a push is done, or when a new build of the app is assembled, AI can generate some test cases, run some smoke tests, identify some, you know, edge cases, something like that, and identify what was changed and how it impacted the real test suite.

And maybe another example: if a test suddenly fails after a change, we can analyze historical runs and say, for example, "Wait, this test has passed in 95% of similar contexts. This might be an anomaly, not a real bug." And that's exactly what we want to avoid: spending time investigating something that was not a bug. So AI, in this case, can automatically trigger,

for example, a retry mechanism, right? When we're designing our test automation frameworks, we're thinking about stability, and most teams believe they have this retry mechanism implemented; it can actually be triggered automatically. AI can also do a cross-comparison with previous test runs and identify the differences.

It can also recommend a human-in-the-loop review, because anyway, if we are talking about AI, we definitely can't just rely 100% on AI-generated outputs or suggestions. So if the pattern is not identified, yes, it can raise a hand, like: "Here's a red flag, you need to pay attention."

So in this way, test engineers are not distracted after each and every test run; they can pay attention only to the significant cases that really require their time for this or that investigation. And this reduces noise and increases the signal. No more panic from flaky tests after minor changes, something like that.
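Pulling those pieces together, a post-run triage step of this kind might look roughly like the sketch below: check the failing test's historical pass rate, retry likely flakes, and raise a flag for a human otherwise. The threshold, the labels, and the source of the history are assumptions for illustration.

```python
# Illustrative failure triage: strong historical pass rate -> suspect flakiness
# and retry; anything else -> escalate to a human. Thresholds are assumptions.
from typing import Callable

def triage_failure(test_name: str, history: list[bool], rerun: Callable[[str], bool]) -> str:
    """history: past outcomes (True = passed); rerun: hook that re-executes the test."""
    if not history:
        return "needs-human-review"          # no data, no automated verdict
    pass_rate = sum(history) / len(history)
    if pass_rate >= 0.95:                    # strong record: suspect flakiness first
        return "likely-flaky" if rerun(test_name) else "needs-human-review"
    return "needs-human-review"              # unusual failure pattern: raise the red flag

if __name__ == "__main__":
    fake_rerun = lambda name: True           # stand-in for a real re-execution hook
    print(triage_failure("test_checkout", [True] * 19 + [False], fake_rerun))
```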

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - In fact, this seems to be a very practical approach toward AI implementation. I really picked up some great points here. Building on what you just mentioned about raising the flag: how do we make it a practice to ensure that QA engineers or teams are not over-relying on AI capabilities? Because, at the end of the day, it comes back to what you mentioned.

You need to train the model, and if your data set is not accurate, the output will be inaccurate as well. So let's get to the answer to this question: what are the guardrails you're building as an organization to avoid any risk of over-reliance on AI tools, especially in QA?

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - That's very true, there are some risks associated with AI, right? Even when you're using just ChatGPT, it has that note at the bottom saying that you need to verify everything AI says, right? You can't just blindly trust it. And that's true. And I believe that recently all this AI stuff has evolved greatly. Really, yeah, there is a lot of money being spent by big companies on AI, right?

So it's not standing still. But anyway, we need to think about how we can protect ourselves and our product from the risks associated with AI. And I think that this ties into the retention and long-term impact points of the AARRR framework. Once teams rely on AI, here is what you need: first of all, governance and clarity.

In my thinking, yes, you can agree or disagree, but I think that without proper governance and clarity, you cannot move forward. You can think about introducing, let's call them, AI review checkpoints, where humans audit model outputs, especially for high-risk features. And I believe that each and every team is supposed to have that product risk analysis, right? Where you understand the criticality, priority, and risk associated with this or that part of functionality or feature.

And as a QA team, or as a QA lead, for example, you understand which parts of the application require special attention because of the risks. And if you're implementing AI in quality assurance within a part that is, let's say, high risk, you definitely need those AI review checkpoints. You need to verify; you need to spend some time before letting your AI models or AI tools operate independently.
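One way such an AI review checkpoint could be wired in is sketched below: AI-generated test changes touching high-risk areas always go to a human for sign-off. The risk register, confidence threshold, and policy are hypothetical, meant only to show the shape of the gate, not a prescribed implementation.

```python
# Illustrative "AI review checkpoint": AI-generated test changes for high-risk
# areas always require human sign-off. Risk labels and thresholds are assumptions.
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical product risk register maintained by the QA lead.
RISK_REGISTER = {
    "payments": Risk.HIGH,
    "profile-settings": Risk.LOW,
}

def requires_human_review(feature: str, ai_confidence: float) -> bool:
    risk = RISK_REGISTER.get(feature, Risk.MEDIUM)   # unknown features default to medium
    if risk is Risk.HIGH:
        return True                                  # always audit high-risk output
    return ai_confidence < 0.8                       # elsewhere, audit only low-confidence output

if __name__ == "__main__":
    print(requires_human_review("payments", ai_confidence=0.99))          # True
    print(requires_human_review("profile-settings", ai_confidence=0.92))  # False
```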

So you need to understand that you need to invest some time and some of your test engineers' capacity, and maybe that of other team members like developers and DevOps, to assess the results you got during the POC stage, let's say. I also think that it's super important to train QAs to challenge the AI.

Because it's like a new skill for test engineers, for test automation engineers: how you interact with the AI, what you do with it, right? You need to know how to use this tool, and this requires separate training and education, I would say. Your QA team, your test engineers, should be able to ask questions such as: "Why was this test skipped?"

Not just see that this test passed and that test didn't, and never investigate the underlying reason. So why was the test skipped? You need to clearly understand that. And, for example, why did this anomaly get flagged? Can the model explain its logic? If you can answer those types of questions, I believe you understand what you have under the hood and how you can operate it.

Here I'd like to emphasize pairing, just like pair programming, for example, right? But with AI. QA works with the tool, questioning, refining, and interpreting. That keeps humans front and center, even as AI becomes a trusted copilot. But anyway, right now we need to have that human in the loop, and we need to have, I would say, a fallback mechanism.

If we identify that something is going wrong, or we are not happy with the results, or we do not understand why it's happening, why AI behaves this way, we need to be able to switch to more traditional processes, just to keep everyone, and the product we're testing, safe.

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - Wow, that's an interesting view. With all this AI, I can clearly gauge that you definitely prefer having a human in the loop. They're not going away. But I'm sure, as you mentioned, they will be able to focus on more strategic, more creative tasks. How does it really transform a QA engineer's profile in your organization, with all this AI around?

You also mentioned that they are pairing with AI, which means you need to be very, very business-focused. It's very difficult to train AI on a business workflow for a specific organization, on one side or the other. But with all that, how do you see the role of QA engineers transforming alongside all this AI?

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - That's actually a great question, because what we see now on the market and what we saw on the market, say, ten years ago are two different profiles. And I think that we're at a very real turning point now. Maybe you heard about the recent memo from Shopify's CEO, where he said that AI is no longer a nice-to-have skill, it's a fundamental skill.

To be more specific, he said something like, "You can't have an engineering team that isn't fluent in AI, period." For me, it marks, you know, a new chapter in the history of skills, right? Everyone now understands that AI is not just a nice-to-have. If you want your engineering team or QA team to be more effective, more productive, and more focused on value, you need to adopt these kinds of tools, and AI provides a lot of support in that way.

For years, we have said: test early, test often. But so much of QE work has still been consumed by repeatable tasks, writing the same scripts, maintaining flaky tests, parsing endless logs; it's been very execution-heavy. With AI, that's starting to change. In my company, I already see that difference, because our teams are trying to adopt more and more AI in different aspects of software development and quality assurance.

And what we've seen is, I would say, a move away from the test case factory to intelligent quality engineering, where QA doesn't just test; they guide quality, and they become the ones designing how feedback flows through the system. AI takes over the low-value repetitive tasks: generating initial test drafts, healing broken tests, clustering failures, or suggesting root causes.

And that frees QAs to focus on higher-order questions. What I mean by higher-order questions is nothing new, actually. These are things like: are we testing the right things? Where are the real risks in this release? How can we shift quality left, maybe even before the first line of code is written?

That's a very basic question, and each and every quality team thinks about it. But when it comes to reality, as a QA lead or as a QA team, what happens? You usually just don't have time to sit down and iterate on these questions, because you need to meet some tough deadlines. You need to create test cases. You need to ensure that the features delivered within this sprint are tested and covered by test automation.

You have to gather all the metrics, you have to analyze reports, you need to, I don't know, process some incidents coming from production, then you need to spend time on some organizational stuff, communication with other team members, and so on and so forth.

And you're just continuously in this loop. In the best-case scenario, you have your test strategy written once, at the very beginning when you just started, and that's it. You have it in Confluence or whatever system you use. And you just think: okay, I need to find some time to get back to it, to review it, to discuss it with the team, and to align it with the evolved product needs, the evolved requirements, and so on and so forth.

So AI significantly helps here, freeing your time as a team lead, a QA lead, or a QA team. You can just sit and think: okay, what are we doing? Are we testing the correct things? Let's maybe think about our product from a different perspective. What testing approaches or types haven't we considered before?

So now you have more time for that, and QA starts collaborating more with the product teams, with the developers, and even with data scientists. They help define quality signals across the delivery pipeline. And they might even be the ones helping to build or fine-tune domain-specific QA agents, AI copilots trained on your company's product architecture and test history.

And I think that actually is a very fun shift that QA engineers are becoming AI trainers in a way, because for AI to work well in testing, it needs context, it needs a lot of context. And who knows the product better than QAs do?

And like, considering the future of the quality assurance engineer as a profession, I think that it's definitely less button-clicking. It's more about that strategy. It's more about quality coaching and about data fluency and even a bit of ML whispering, something like that.

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - Wow, I think this is one of those sessions where I got to learn a lot about the practical implementation of AI, and I really liked the whole thought process. It was an excellent session. Lucy, we are on top of the clock already, but thank you so much. It was a pleasure to have you and hear your thoughts; quite insightful.

While I said we are on top of the clock, I would love to pick your brain offline and build on top of what we discussed today. With that, a big thank you for being part of this conversation. Have a great rest of your day.

Lucy Suslova (Head of Quality Engineering Excellence, Intellias) - Thank you very much, and thank you for having me. I'm happy to share my thoughts.

Sudhir Joshi (VP of Channels & Alliances, LambdaTest) - Thank you.

Guest

Lucy Suslova

Head of Quality Engineering Excellence

Lucy Suslova is the Head of Engineering Excellence at Intellias, where she leads a horizontal function that drives the growth and adoption of engineering best practices across the company’s diverse Centers of Excellence, including Frontend and Backend Development, Quality Engineering, Cloud and Systems Engineering, and beyond. With over a decade of experience in the tech world, Lucy is passionate about helping the organization unlock its full potential through innovative practices and cutting-edge solutions. She’s focused on making things better, faster, and smarter—whether that’s implementing AI in testing, scaling cloud infrastructure, or pushing the boundaries of software engineering. Lucy thrives on fostering strong, cross-functional connections that fuel excellence, creativity, and technical innovation.


Host

Sudhir Joshi

VP of Channels & Alliances, LambdaTest

With over 8 years of marketing experience, Sudhir Joshi is the VP of Channels & Alliances at LambdaTest. In this role, he leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Sudhir Joshi played a key role at Internshala, a startup in Edtech and HRtech, where he managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Sudhir Joshi excels in creating and executing marketing strategies that foster growth, engagement, and awareness.

