XP Webinar Series

Transforming Quality Engineering: How AI and Automation are Shaping Scalable Testing Pipelines

October 22nd, 2025

31 Mins

Shiva Kodithyala (Guest)

Senior Manager - Quality Assurance, Bread Financial
Mudit Singh (Host)

VP of Product & Growth, LambdaTest

The Full Transcript

Mudit Singh (VP of Growth & Product, LambdaTest) - Hey, hello everyone, welcome to our latest edition of the XP Series. I hope everybody has had their cup of tea or cup of coffee, or is just starting their day, whichever time zone you have joined from. We usually do this as a recorded session, but this time is a little bit special: we are doing it live, and across multiple platforms. I'm really thankful to everyone who has joined us, and to those still joining, for being part of this session.

So in today's session, we're going to talk about some of our favorite topics again, AI and automation, and specifically look at how enterprises like Bread Financial are transforming quality engineering, how AI is adding value, and how AI is helping shape scalable testing pipelines. It's no secret that as software development speeds up, quality engineering teams are facing increasing pressure to deliver high-quality products faster.

Gen AI tooling like GitHub Copilot, Claude, and Codex is letting developers write more and more code, but at the same time, this code also has to be tested, and traditional testing methods sometimes struggle to keep up, specifically where AI-based code generation comes into play. We'll also talk about in-sprint automation: new features come in, people have to automate those, and at the same time ensure existing pipelines keep working.

So, in today's session, we're going to explore how AI-driven automation is revolutionizing the quality engineering space and how it is enabling teams to build scalable and resilient testing pipelines. We'll also discuss how AI helps with things like predicting test outcomes, detecting anomalies, and optimizing test strategies. And we have a lot of questions lined up for the guest joining us tonight.

So, joining me today is Shiva Krishna Kodithyala, a Senior Engineering Manager at Bread Financial with over 19 years of experience in quality engineering, test automation, and platform engineering. At Bread Financial, he leads a team focused on modernizing test environments, integrating AI solutions, and enhancing CI/CD pipelines. So, Shiva, first of all, thanks for joining us today and coming on the show to share your insights.

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Yeah, thank you, Mudit. I appreciate the background on the topic, and thank you for having me here. Good morning, good afternoon, or good evening, wherever you are, based on your time zone. I'd love to share my experiences.

And thanks to LambdaTest; you recently had the TestMu Conference with a great turnout, and I also heard about these monthly webinars. You're doing a great job, especially with things changing every day, and I think these kinds of webinars will definitely help quality engineering and platform teams come up to speed with industry trends. Thank you for having me here.

Mudit Singh (VP of Growth & Product, LambdaTest) - Thanks for your words of motivation, Shiva. We really hope to continue doing these kinds of events, and hope to have your patronage for them as well. But yeah, jumping straight to the topic: AI is a very hot topic, and it has been for a few years now, but it's not just AI anymore; it's the complete agentic workflow that has taken over. While a lot of traditional testing best practices still remain, AI has added a new element, a new paradigm, to the overall ecosystem.

So this is, in fact, where we wanted to start: based on your experience building test automation, performance testing, and even DevOps practices, how is AI changing the way quality engineering teams approach software testing, and in fact, overall product delivery?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Yeah, a great question, Mudit, and I appreciate you asking it, because I think it's on everyone's mind: how can I use AI, and how will it help my day-to-day job, right? Traditionally, if you look at years ago, or even some large enterprise companies today, they spent a lot of time running regression suites and creating tests. I know organizations that run their tests overnight: 500 tests, 1,000 tests, 2,000 tests that take all night, right?

So, how does AI help here? AI is definitely not a replacement for testers; I want to make sure that's clear. It is an intelligent assistant to embed into the quality pipeline in the right way. I'll go into more detail during this session, but there are a lot of ways we can leverage AI to help quality engineers' day-to-day lives. One of them is smart test selection, right?

Using AI models, what we can do is contextualize the models to see what kinds of code changes we're making, what the commit history is, and also past defects and trends like that. That's important, because smart test selection addresses a real problem: you have thousands of tests to run, and you have to wait for the whole cycle to complete before you can ship to production, right?

But with smart test selection, say I have a login module and a payment module, and the engineer makes code changes only in the payment module. We can train the AI models to pick only the payment-related tests, right? That saves a lot of time. Traditionally, the full suite might take 2 hours to run, but the payment tests themselves may take less than 5 or 10 minutes, right?
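To make the idea concrete, here is a minimal sketch of change-based test selection, assuming tests are tagged by module and the changed file list comes from something like git diff --name-only. The paths, tags, and fallback rule are illustrative assumptions, not any particular product's implementation.

```typescript
// Illustrative sketch: map changed source paths to test tags.
// MODULE_TAGS, the paths, and the "@all" fallback are assumptions for this example.
const MODULE_TAGS: Record<string, string> = {
  "src/payments/": "@payments",
  "src/login/": "@auth",
};

function selectTags(changedFiles: string[]): string[] {
  const tags = new Set<string>();
  for (const file of changedFiles) {
    for (const [prefix, tag] of Object.entries(MODULE_TAGS)) {
      // A change under a module's path selects that module's tagged tests.
      if (file.startsWith(prefix)) tags.add(tag);
    }
  }
  // If a change maps to no known module, fall back to the full suite.
  return tags.size > 0 ? [...tags] : ["@all"];
}

// A commit touching only the payment module selects only @payments tests,
// which could then drive e.g. `npx playwright test --grep "@payments"`.
console.log(selectTags(["src/payments/refund.ts"])); // ["@payments"]
```

An ML-based version replaces the static path map with a model trained on commit history and past failures, but the contract stays the same: changed code in, a narrowed test set out.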

So, this is one of the ways AI can be leveraged in our SDLC, right? And then the other one is flaky tests. Today, one of the biggest pain points all quality engineers have, I'm sure, is flaky tests: because of wait times, or test data and dependencies, or the UI being updated every time there's a change. Engineers don't always communicate with quality engineers, and then the tests fail; there are so many causes, right?

Using AI, we can predict these failures, or even update the tests: there are AI mechanisms where the tool itself will update the locators, rerun the test, and make sure it passes. That's one example. And then the other thing is anomaly detection. In reality, there are a lot of hidden defects that are only uncovered in production; we always see that we sign off the tests, go to production, and then suddenly we see a production issue, right?

AI can help with anomaly detection there: it can continuously monitor specific telemetry data, and whenever it spots unusual latencies or anything like that, it immediately flags the engineer to take it forward, right? So these are some of the different ways it helps quality engineers and platform engineers.
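The core of that monitoring idea fits in a few lines. Below is a hedged sketch of statistical latency anomaly detection against a rolling baseline; real observability platforms use far more sophisticated models, and the window size and z-score threshold here are arbitrary assumptions.

```typescript
// Illustrative sketch: flag latency samples that deviate sharply from a rolling baseline.
// The 50-sample window and 3-sigma threshold are assumptions, not tuned values.
function detectLatencyAnomalies(
  latenciesMs: number[],
  window = 50,
  zThreshold = 3,
): number[] {
  const flagged: number[] = [];
  for (let i = window; i < latenciesMs.length; i++) {
    const baseline = latenciesMs.slice(i - window, i);
    const mean = baseline.reduce((a, b) => a + b, 0) / window;
    const std = Math.sqrt(
      baseline.reduce((a, b) => a + (b - mean) ** 2, 0) / window,
    );
    // Flag any sample more than zThreshold standard deviations above the mean.
    if (std > 0 && latenciesMs[i] > mean + zThreshold * std) flagged.push(i);
  }
  return flagged; // indices of anomalous samples, e.g. to alert an on-call engineer
}
```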

But it is also about the cultural impact. AI is primarily freeing engineers from repetitive validation so they can focus on engineering quality rather than just testing for quality. That's an important piece: we have always focused on testing for quality, but with AI's help, we can build quality into the engineering itself, which matters from an SDLC standpoint, right?

It also helps us curate smarter frameworks, surface insights, and optimize pipelines. And the important thing is, QE has always been treated as a gatekeeper, right? With AI, we can make quality engineers enablers: proactive rather than reactive. So, there are a lot of approaches, and I'm going to talk about some of them today. But yeah, there's a lot going on in the industry.

Mudit Singh (VP of Growth & Product, LambdaTest) - So, there's a lot to unpack from the last 5 minutes on how AI is helping us out; I'm taking some notes. To summarize, we have defect detection, auto-heal, flaky test detection, and anomaly detection. And in fact, the best part is shifting teams away from manual, redundant effort and toward the quality side of the application, making the overall process more scientific in nature rather than just rote work.

That is, I will say, one of the biggest value adds of AI-based tooling. So, let's double-click on a few of those things, and I feel the first one is the prediction part we were talking about: defect detection and prediction of test outcomes, and how that is going to help reduce manual effort. Based on your experience, what is the best use case of AI specifically for predicting test outcomes?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Yeah, there are a couple of things we are actually trying. We are training some models for smart test selection. As in the example I gave, when you have a payment module, you only run the payment-related tests, because your test automation suites are huge at the organization level; your scrum team or product team might have a smaller suite, like 300 or 400 tests.

But when you look at the bigger organization and run end-to-end tests, it will be huge, right? So smart test selection is definitely going to play a great role, and we're actually working on it right now; hopefully we will have it implemented sometime soon. We have a proof of concept done, and just to share, in one of our recent hackathons, we also did smart test selection by training the models.

And the cool thing is, it's not only about the quality engineering side; this can also be leveraged by the engineers, primarily for unit tests, integration tests, and component tests, because it saves time everywhere; you don't need to run the whole set of tests. Typically, unit tests won't take much time, but there are certain integration tests or contract tests that will, so this technique can be leveraged across the SDLC wherever it is applicable.

So, smart test selection is something we're definitely trying. And then the other thing is auto-healing, because this is one of the important things for all the UI automation engineers; the thing they keep complaining about is, hey, you changed the UI again, right?

You did not inform me. Well, now there are a lot of AI techniques coming in, and tools available to leverage: whenever there is a DOM change, the tool automatically heals the locator and then runs the test, right? These are things we can apply practically and see quick wins for our organizations.
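One building block behind such tools is a ranked fallback chain of locators. Here is a hedged sketch using Playwright's page.locator and count APIs; the helper name and candidate selectors are assumptions for illustration, and commercial healers go further by scoring candidates against the new DOM and persisting the winning locator.

```typescript
import type { Page, Locator } from "@playwright/test";

// Illustrative fallback-locator helper: try a ranked list of candidate selectors
// and return the first that resolves in the current DOM. Healing tools such as
// Healenium go further, scoring candidates and persisting the fix automatically.
async function resolveWithFallback(
  page: Page,
  candidates: string[],
): Promise<Locator> {
  for (const selector of candidates) {
    const locator = page.locator(selector);
    if ((await locator.count()) > 0) return locator; // first match wins
  }
  throw new Error(`No candidate selector matched: ${candidates.join(", ")}`);
}

// Usage: stable test id first, then progressively looser fallbacks.
// const payButton = await resolveWithFallback(page, [
//   '[data-testid="pay-now"]',
//   "button#pay",
//   "text=Pay now",
// ]);
// await payButton.click();
```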

Mudit Singh (VP of Growth & Product, LambdaTest) - That connects back with what we have been building here at LambdaTest, and with feedback from our enterprise customers, so this is coming from my experience too. A lot of AI-based tooling is helping us figure out which tests to run first for faster developer feedback, and then continue running the rest of the tests to build up confidence.

So, figuring out which tests to run immediately, based on, let's say, a developer commit or on business risk, significantly adds value, specifically when developers have to wait to find out whether what they built is working; that developer feedback time is reduced. And also connecting the dots: if we can intelligently predict what to run and what the possible outcomes are, we can figure out why a run fails. That is a great value add.

But this is all before running the tests. Once we have run the tests, there are tons of logs, and figuring out what is breaking becomes another challenge. I think this is what we were just talking about: anomaly detection, defect detection, flaky tests. So how is AI helping you out with all of those tasks?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Exactly. Yeah, so I gave an example of one of the common problems UI automation engineers always have: an ever-changing UI, no communication, no collaboration; things happen. With AI techniques, there are some tools already available. For example, for people who use Selenium, there is a plugin called Healenium. Based on the UI changes, it automatically adjusts the locators and then reruns the tests, right?

So, that is a classic example of how things are changing and how AI can help fix the tests or ensure their stability. And the other one is anomaly detection, as you mentioned. Anomaly detection is primarily where we have AI tools or techniques that continuously look at the logs and telemetry. Typically, let's say you have an API that always responds in about 200 milliseconds, but under specific data conditions or specific loads it deviates.

Traditionally, you'd have to wait for a performance testing team to run a huge load test and give you the results, right? Instead, we can use AI techniques to flag the respective module: when there is a deviation, it can alert us that this is somewhere we predict is going to break. There are different approaches; fixing flaky tests is one, anomaly detection is another.

And then also defect trends, right? You can basically train models on historic trends to see what kinds of changes are being made and what kinds of defects are coming in. The AI can predict which areas to look at, or make sure those areas are fully covered, right? So these are some predictive things AI can help with as we move forward in a fast-paced environment.
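A rough sketch of that defect-trend idea: rank modules by historical defect density and recent churn so the riskiest areas are tested first. The risk formula, field names, and numbers below are invented for illustration; a trained model would learn such weights from the issue tracker and version history instead.

```typescript
// Illustrative risk ranking from historical defect data (all values hypothetical).
interface ModuleHistory {
  module: string;
  defectsLast90Days: number; // from the issue tracker
  changesLast90Days: number; // commits touching the module
}

function rankByRisk(history: ModuleHistory[]): string[] {
  return history
    // Simple risk proxy: past defects weighted up by recent churn.
    .map((h) => ({
      module: h.module,
      risk: h.defectsLast90Days * (1 + h.changesLast90Days / 10),
    }))
    .sort((a, b) => b.risk - a.risk)
    .map((h) => h.module);
}

console.log(
  rankByRisk([
    { module: "payments", defectsLast90Days: 9, changesLast90Days: 24 },
    { module: "login", defectsLast90Days: 2, changesLast90Days: 5 },
  ]),
); // ["payments", "login"]: payment tests run first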

Mudit Singh (VP of Growth & Product, LambdaTest) - So, that's another interesting aspect I want to double-click on: the auto-heal side of things. It's not even just about locators anymore. Earlier, one way people approached auto-heal was that whenever a test was built, instead of one locator, they cached or stored all the candidate locators, and if the primary one broke, they tried the others; whichever stuck became the auto-heal fix.

But now, things have advanced further than that. It's not just about locator strategy anymore; it's more nuanced and, in fact, more context-aware. So, have you seen this kind of feature set adding value to your auto-heal outcomes?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Definitely, we have seen it. As you rightly said, it's not only about locators; it's about DOM changes, a lot of things changing in the UI. Especially with certain third-party tools, Salesforce or others, where a lot of releases happen completely outside your control, because they're third-party enterprise organizations, right? So some of these capabilities are definitely going to help predict, auto-heal, and then run the tests.

I've definitely seen some of these examples and how they help in practice. But to answer your question: yes, locators are just a starter, and then there is more to it; DOM changes, UI changes. The tool predicts the fix and gives you a complete PR, which you can review, approve, and merge, and then things move on, right?

So, that's the direction we are trending in. One of the challenging things otherwise is that you spend a lot of time debugging and analyzing. With AI's help, it's much faster; trust me, what used to take 4 or 5 hours would probably take less than 5 or 10 minutes.

Mudit Singh (VP of Growth & Product, LambdaTest) - And it also adds a lot of resiliency to the overall existing pipeline, right?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - That is important, yes.

Mudit Singh (VP of Growth & Product, LambdaTest) - It's not just about figuring out what breaks; it's also about ensuring that it does not break further, or if a break happens, knowing why it is breaking. So, let's hear from your experience: scalability and resiliency, and how AI is adding to those, specifically when we are releasing software at a very high speed.

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Yeah. Scalability, definitely. Originally, when we started automation, people used to run a lot of tests, and then we slowly evolved on the scalability front. Traditionally, to run 1,000 tests, people used to spend 8 or 9 hours, right? With the evolution of cloud platforms like LambdaTest, BrowserStack, and others, things can run much faster.

So scalability and parallelization are really, really important, because they allow you to build confidence, to trust the pipelines and the automation, and then ship faster. There are a lot of things a quality or platform team can do to enable scaling and parallelism: newer techniques like Dockerization, for example. And traditionally, a lot of organizations used on-prem setups; they hosted their own Jenkins, or CloudBees, and so on.

But with the cloud, it's much faster to adopt and use. Not to promote the LambdaTest product, but HyperExecute is a great example of how to scale and run faster, and I know there are a lot of other products available in the market as well for scaling and faster execution, right?
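For a concrete baseline before any vendor tooling, most modern runners expose parallelism directly in config. Here is what that looks like with Playwright's defineConfig options; the worker and shard counts are illustrative and should be tuned to your infrastructure.

```typescript
// playwright.config.ts: one concrete way to get the parallelism discussed above.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  fullyParallel: true, // run tests within a file in parallel, not just across files
  workers: 8,          // parallel workers per machine (illustrative count)
});

// Split the suite across CI machines with sharding, e.g.:
//   npx playwright test --shard=1/4   (machine 1 of 4)
//   npx playwright test --shard=2/4   (machine 2 of 4)
// Cloud grids and products like HyperExecute distribute work the same way at larger scale.
```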

The other important thing is shifting left. Traditionally, when you have dev, QA, and staging environments, people always used to test mostly on the staging environment, right before it goes to production, right? But the important thing is shifting left, and I'll also say: shift smart, not just left.

Don't just test early in the cycle; also use these AI techniques I'm talking about (and there are more; I'm only covering a few here) to catch defects and issues early in the cycle, so that you can ship with much more confidence, and faster.

Mudit Singh (VP of Growth & Product, LambdaTest) - Awesome, awesome. So now, let's shift gears a little bit. We have been talking about how AI is adding value and how AI is helping, but for somebody who has still not started that AI journey, who has not yet added AI and AI agents into their existing quality engineering processes, what would be your first advice, the practical strategies for getting started with integrating AI into quality engineering workflows without disrupting their existing processes? Because that is the first question everybody has: I am not going to reinvent the wheel; I'm just going to enhance what already works. So, how does that look for somebody who's just starting out on that journey?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - I think it's a great question, because a lot of enterprise companies have their own processes, governance policies, and so on, so it is important to understand how you can embed these AI techniques into your existing flows. There are several ways, and I will talk about some of the tools that are available. I'm not promoting any of them; I'll just tell you what I know. I mean, I'm not paid for that, but you are. Just kidding.

But yeah, I'll talk about what I know about how organizations are doing this. There are several things. For example, if people are using Cypress, they recently launched cy.prompt, where you can type in natural language and it will convert that into the script. How cool is that? You just type it in. I think they launched it just recently, and they've had multiple webinars about it.

So cy.prompt is such a powerful technique: you just type in natural language. And especially since a lot of enterprise organizations traditionally still have manual test teams, they can leverage this and slowly grow those quality engineers into SDETs, right? And then Playwright also has AgentQ, where it's not natural language as such, but it will predict what you are trying to do in terms of scripting and then complete it, right?
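As a hedged illustration of the natural-language-to-test idea, here is roughly what a cy.prompt spec looks like. The command was experimental at the time of this webinar, so the exact API, the config flag that enables it, and its behavior may differ by Cypress version; the steps and page under test are invented.

```typescript
// Hypothetical spec using Cypress's experimental cy.prompt command.
// Each string is a plain-English step that Cypress turns into test commands.
describe("checkout", () => {
  it("pays with a saved card", () => {
    cy.prompt([
      "Visit the checkout page",
      "Select the saved Visa card ending in 4242",
      "Click the Pay now button",
      "Verify the order confirmation message is shown",
    ]);
  });
});
```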

So if we're talking about test case creation or script creation, Cypress has that and Playwright has this. And with respect to test case creation itself, I know LambdaTest has Test Manager, where you can feed in your requirements and it will convert them into test cases. A lot of organizations now have this; Tricentis has one, and many tools convert requirements into test cases.
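The underlying pattern these products share is a constrained LLM call: requirement in, structured test cases out. The sketch below uses the OpenAI SDK purely as a stand-in; the model name, prompt, and output format are assumptions, and the commercial tools named above expose this as a built-in feature rather than raw API calls.

```typescript
// Hedged sketch of the requirements-to-test-cases pattern (not any vendor's implementation).
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function requirementToTestCases(requirement: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      {
        role: "system",
        content:
          "You are a QA analyst. Produce numbered test cases with preconditions, steps, and expected results.",
      },
      { role: "user", content: requirement },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Example:
// requirementToTestCases("Users can reset their password via an emailed link.")
//   .then(console.log);
```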

That's one way to think about it. And then going back to the examples I mentioned about maintenance: Healenium is one tool that will apply a healing strategy and auto-heal the tests. Another example is GitHub Copilot. We have personally seen in our organization, with API tests and similar test scripting, that 60% of the code is already there when you use Copilot and tools like it.

We can contextualize it, give the right inputs, and it will generate 60% of the code. How cool is that? Typically it takes 8 hours, 10 hours, or 2 days to write these tests, but within 15 minutes you have 60% of the code; then as a quality engineer, you just come in and complete the remaining 40%, right? Again, it has some caveats and trade-offs: don't blindly trust the tool. It takes time, months and months or even years, to train the models and then use them well.
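To picture that 60/40 split, here is an illustrative API-test skeleton of the kind an assistant can draft, written with Playwright's request fixture. The endpoint, payload, and assertions are hypothetical placeholders; the TODO marks the remaining work an engineer still owns, which is exactly the point about not trusting the tool blindly.

```typescript
// Illustrative assistant-drafted API test (endpoint and payload are hypothetical).
// Assumes a baseURL is configured in playwright.config.ts.
import { test, expect } from "@playwright/test";

test("POST /payments creates a transaction", async ({ request }) => {
  const response = await request.post("/payments", {
    data: { amount: 1999, currency: "USD", cardToken: "tok_test" },
  });
  expect(response.status()).toBe(201);

  const body = await response.json();
  expect(body.transactionId).toBeTruthy();

  // TODO (engineer): the remaining 40%: auth headers, negative cases,
  // idempotency checks, and cleanup still need a human reviewer to add.
});
```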

And then Katalon has a similar tool as well, along the lines of Copilot, and LambdaTest also has SmartUI. And it's not only on the test creation or maintenance side; on the observability side as well, Splunk, Dynatrace, and New Relic all have AI-based capabilities that we can leverage in our day-to-day practice.

I'm sure some of you might already be using these tools without even realizing how you can utilize them in your day-to-day life. Talk to the vendors, go through the documentation, have a meeting with them, and get more insights into how you can leverage those techniques to help your day-to-day work.

Mudit Singh (VP of Growth & Product, LambdaTest) - Cool. So, another very interesting point we just clicked on: do not trust the tool blindly. What other words of caution would you offer? Again, for somebody who's starting to implement this, what other things should they be careful about before diving headfirst into tooling they do not understand?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Yeah, absolutely. The thing is, it's a cultural shift, basically. We need to look at every stage of the SDLC, primarily where you do the scripting or maintenance. My suggestion is to roll out slowly. Pick a module, or a small team, where you can apply it, see how it helps, train your model slowly, make it better, and then roll it out to the other teams, right?

That's the key: slow and steady. Roll out gradually and figure out how it behaves and how it contextualizes to your organization, because every organization has its own tools and processes, and you need to identify the right areas to apply it, whether that's optimization, creation, maintenance, or observability. There are several places you can apply it and take it forward in your organization.

Mudit Singh (VP of Growth & Product, LambdaTest) - So, we are right on time; I think we just have a few minutes left, but I want to close the session with, I'll say, the last conclusive lessons you want to share with the people who have joined us today, specifically from enterprises. What are the core lessons you want to impart to quality engineering teams on that transformational journey, which they can use to improve their testing practices?

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Yeah, rightly asked, Mudit, because that's important. One of the key things, not only for engineers but also for leaders and C-level executives, is this: treat quality engineering as a platform, not as a project. Not just "hey, did we do testing?" It's about treating QE as a platform, with shared frameworks, reusable components, shared environments, self-service pipelines, and utilities and test assets that can be used across the board.

Because traditionally, I've seen a lot of companies where Team A and Team B have a lot of redundancy, right? Treating QE as a platform helps with reusable frameworks and shared environments. And the other important thing I touched on is shift left, but also shift smart: use automation and AI together to identify the right areas, so that we catch issues early in the cycle and bring faster release cycles to the organization.

The other thing is observability. Observability is very, very important, because a lot of times there are hidden defects within the life cycle, as I mentioned. So use the telemetry data, logs, metrics, and analytics, get those insights, and identify these things early. And the final important thing, I'd say, is to build a culture of collaboration.

That's really, really important. On the culture side, a lot of teams are resistant, not ready to move forward and adopt this, but roll it out slowly, one step at a time, leverage these techniques, take advantage of faster issue identification in the SDLC, and take it forward in your organization.

Mudit Singh (VP of Growth & Product, LambdaTest) - Awesome, awesome. I think those were really great closing lessons for everybody here to take back to work, or back home, and think upon. We covered a lot today. In this session, we talked about defects, auto-heal, flaky tests, and anomaly detection, and most importantly, we talked about best practices: the engineering practices, from a tools standpoint and from process and culture standpoints, that companies should be implementing.

Again, Shiva, thank you for sharing your insights on all of these things. I learned a lot, and I'm quite sure the people who joined us today learned a lot as well. For those asking: yes, this session was recorded, and we'll be sharing it across our YouTube channel and other platforms pretty soon, so you can go back and revisit it.

And if you have any questions for us, feel free to DM us; my LinkedIn, and in fact Shiva's LinkedIn, is pretty much open. Feel free to reach out to us, or when we share this on YouTube, ask your questions in the comments, and we'll be happy to answer them. But yeah, again, Shiva, thanks for taking the time out and sharing your insights with us.

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Thank you, Mudit, and thanks to LambdaTest for hosting these webinars every month. This is really helpful for a lot of engineers, because, as I mentioned, every day new technology, new things, and new tools are coming in. I think these webinars will definitely help a lot of engineers upgrade themselves, especially with AI and everything else. Keep doing this; whatever I can do, I'll definitely chime in and help teams by sharing knowledge and insights.

Mudit Singh (VP of Growth & Product, LambdaTest) - Thank you for your words of motivation, Shiva. And for the rest of you: feel free to subscribe to our YouTube channel, where we share a lot of the insights we talked about here. And yeah, looking forward to doing this again next time. Thanks, Shiva. Thanks for joining us today.

Shiva Kodithyala (Senior Manager - Quality Assurance, Bread Financial) - Thank you. Thank you, everyone, for joining.

Speaker

Shiva Kodithyala

Senior Manager - Quality Assurance, Bread Financial

With over 19 years of experience in Quality Engineering, test automation, and platform engineering, Shiva Krishna Kodithyala is a seasoned leader known for driving innovation and optimizing software testing processes. As a Senior Engineering Manager at Bread Financial, he leads a team focused on modernizing testing environments, integrating AI solutions, and enhancing CI/CD pipelines to improve overall engineering productivity. Shiva has extensive expertise in test automation, performance testing, DevOps practices, and cloud-native solutions. He is passionate about enabling organizations to boost efficiency through automation and AI and has played a pivotal role in enhancing the reliability and scalability of test pipelines. His efforts have consistently reduced deployment failures and accelerated delivery timelines, driving improvements in software quality.


Host

Mudit Singh

VP of Product & Growth, LambdaTest

Mudit is a seasoned marketer and growth expert, boasting over a decade of experience in crafting and promoting exceptional software products. A key member of LambdaTest's team, Mudit focuses on revolutionizing software testing by seamlessly transitioning testing ecosystems to the cloud. With a proven track record in building products from the ground up, he passionately pursues opportunities to deliver customer value and drive positive business outcomes.



WAYS TO LISTEN

Spotify | Apple Podcast | Amazon Music | YouTube