XP Series Webinar

Navigating the Future of Quality Engineering in 2024

In this XP Series episode, you'll navigate the future of quality engineering in 2024 with LambdaTest's survey report, gaining strategic insights into the trends and innovations shaping the landscape of quality assurance.


Mudit Singh

Head of Growth & Marketing, LambdaTest


Manoj Kumar

VP - Developer Relations & OSPO, LambdaTest


Mudit Singh

Head of Growth & Marketing, LambdaTest

Mudit Singh, Head of Growth and Marketing at LambdaTest, is a seasoned marketer and growth expert, boasting over a decade of experience in crafting and promoting exceptional software products. A key member of LambdaTest's team, Mudit focuses on revolutionizing software testing by seamlessly transitioning testing ecosystems to the cloud. With a proven track record in building products from the ground up, he passionately pursues opportunities to deliver customer value and drive positive business outcomes.


Manoj Kumar

VP - Developer Relations & OSPO, LambdaTest

Manoj is an open-source enthusiast who has contributed to various libraries in the quality ecosystem. Notably, he is a contributor to the Selenium project, a member of its project leadership committee, and an Appium committer. Manoj is passionate about sharing knowledge and has delivered keynote sessions at Selenium Conference, STeP-IN, and SLASSCOM, in addition to other technical talks around the world. He is an avid accessibility practitioner and a voluntary member of the W3C ACT-R group.

The full transcript

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Hello everyone! It's great to see you all on a platform I'm not usually part of. Welcome to this exciting episode of the LambdaTest XP Series, which is quite different from the other initiatives we run at LambdaTest.

Through the XP Series, we bring you the latest innovations and best practices in quality engineering and software development, together with experts from across the software industry. Joining me today is Mudit Singh, Head of Growth and Marketing and also one of the founding members here at LambdaTest.

And for those of you who don't know me, I am Manoj, and I'll be the host, or rather the co-host, since this is more of a podcast and we'll be trading questions on a very exciting topic: the survey we ran recently. But before diving into that, Mudit, how are you?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Hey Manoj, doing pretty good. Thanks for joining me today in this episode. I know this is something that's pretty new for both of us, but I'm really excited about our conversation.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Very happy to be here, Mudit. Thank you. So this session is about the “Future of Quality Assurance Survey Report” that we ran a few months ago. The survey produced some interesting insights, and that's what we're going to talk through today so you get a clear picture of what those insights are all about.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, in fact, this is the first year we ran this survey, and we had an overwhelmingly good response. But before we start, I want to give a friendly disclaimer: this report should not be taken as our recommendations. It's better read as a benchmark of what the state of quality assurance looks like across the industry, and of the numbers, processes, and best practices teams can strive towards when looking at this data.

So with that said, let's get started. Manoj, we know we created this survey as part of our Testμ initiative, as a way of giving a voice to the community as a whole. But I'd like to hear it in your words: what was our motivation behind launching this survey?

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - That's a tough question, I would say. Almost everyone in the community knows that community is at the heart of LambdaTest, right? So the survey is rooted in our commitment to understanding and navigating the ever-changing tech landscape.

If you look back even five years, or just at last year, trends change every single year. We run various community activities at LambdaTest, like the Voice of Community, the Spartans Program, and, most importantly, our flagship event, Testμ, a three-day online conference.

This is actually something we started last year, in 2023, when we ran Testμ and announced that we would be doing the survey. Now that the survey is done, we have some interesting insights for you. Most events cover either a single use case or a particular topic.

With this report, though, we wanted to go deeper: how testing and quality are viewed across organizations, from startups to mid-size companies to big enterprises, how testing fits into the overall SDLC phases, and how quality is being prioritized.

That's why we wanted to bring all of these insights together into one report, and I believe we did that; I was quite excited to see the results. But before diving into the individual sections of the survey, I want to ask you: how diverse was the respondent pool? Because that's very important, right?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So yeah, in fact, we had a pretty diverse range of people respond. As you know, Testμ was a pretty big platform for us; more than 18,000 people registered, and that was visible here as well. We had around 1,600 valid responses from across 70+ countries.

Test engineers were the biggest demographic, but they made up only around 51% of respondents; a lot of developers and software engineers took the survey as well. On experience level, around 31% of respondents had more than 10 years of experience.

We also had people just starting their testing or development journey: around 30% had under three years of experience. Overall, the demographics were pretty mixed, with 29% and 17% falling in the experience ranges around three to six years.

In terms of company size, we had a pretty diverse crowd as well. Around 43% of respondents were from large companies, those with more than 2,000 employees.

Another 27% of respondents were from medium-sized companies, and the rest were from smaller companies, startups, and early-stage startups. So across the board we had diversity in every dimension; a pretty diverse group, I'd say.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Yeah, it's pretty interesting. More than the individual personas, whether someone is a junior, mid-level, or senior engineer, what interests me most is the size of the organizations. For a survey on the future of quality assurance, that's very important.

As you mentioned, small companies came in at 28%, medium at 27%, and large at 43%, which is a great mix. I'm glad the survey captured such a diverse range of organization sizes. Thanks for running it, Mudit.

Now, let's touch on the state of testing. Let's get into the crux of it. In such a diverse mix of survey responses, where do testers spend most of their time?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So yeah, this was, in fact, one of the most interesting insights we got out of this survey. If your profile is SDET or tester, the first thing you'd expect is that you spend most of your time writing tests.

Writing code to test other code is clearly an important activity, but it turns out it's not where SDETs or developers actually spend most of their time.

So there are what I'd call the pre-test activities: only 17% of their overall time goes into authoring tests, planning the tests, and defining the processes. A lot of time goes into what happens after test execution. For example, they spend around 11% of their time optimizing tests and another 11% triaging and debugging.

Nearly 12% of their time is spent on result analysis and reporting, creating reports after the runs. And then there is time that, I'd argue, isn't the best use of their effort: for example, nearly 8% of their time goes into fixing flaky tests.

That has been one of the biggest challenges. Nearly another 8% of their time goes into fixing broken tests, tests that worked earlier but broke because of some change, so that healing of tests also takes around 8% of their time.

And the metric I found most alarming was test execution monitoring: nearly 20% of testers' time is spent just sitting back and watching the tests execute. That is something I feel can definitely be optimized in the future.

This also reflects the testing practices companies are following and how exactly they are utilizing the time and resources they have. And in fact, this is something I think you can help us with, Manoj.

One of the questions in the survey was about the development budget. What do you see in terms of how companies are spending their development budgets, and what should they be spending them on?

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Sure, yeah. Before touching on that, it's quite interesting what you mentioned about how testers spend their time before scripting, after scripting, and then while running the tests. I still remember the days when there was far less cloud adoption, and less grid adoption too, where people would run and distribute their tests.

But now we are in the age of the cloud, and now the age of AI. It's still alarming to see that people are fixing scripts, re-running them locally, and then sitting and watching them run.

That's really a pain point that should be taken care of, and I hope to see it improve, because testers should be spending their time on more productive work. So yes, here's hoping it gets better in the coming years.

To answer your question on the development budget: well, there's no single right answer; it totally depends. I was a consultant before joining LambdaTest, and I worked with a lot of companies, especially enterprises, but also mid-stage companies and even startups.

The answer depends, and I would start with the "why" principle: nailing down the five whys, getting the priorities right, and understanding where and how quality is being prioritized.

How high a priority is quality for you? Of course, at the beginning everyone says it's a top priority, but then look at the time actually given to testers and to quality; I think we'll talk more about that in the coming questions. Largely, what I want to stress is that thinking about quality is good, and advocating for quality is very, very important.

But people often miss the cost of tooling when they allocate budgets, or, if the majority of the budget goes into tooling, the time given to the testers shrinks. It's also important to consider the non-functional requirements. Testing functionally is all well and good, but are you testing for security? Are you testing for accessibility?

Are you doing visual testing, comparing screens to verify the user experience? A lot of these things really matter, and you should make those calls as early as possible and factor all of them into the budget. With that said, I want to see whether the report agrees. Is there anything different from the report that you'd like to share?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So yeah, if somebody wants a benchmark, the report data says that nearly 50% of the companies and teams we surveyed spend up to 25% of their overall development budget on testing resources.

The larger companies, which of course have bigger products to test, spend more: around 30% of the larger companies spend 25% to 50% of their overall development budget on testing.

So if you sit around that median, that's good. But again, these are benchmarking numbers; you can compare them with what your own process looks like and adapt accordingly.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - And when you say resources, that 25% isn't just the actual staffing, right? It's everything inclusive.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, it includes the tooling. And I think the biggest spend is always on the infrastructure side: the environments and testing resources used to run and execute those tests, and in some cases to write them as well.

So tooling, infrastructure, and of course the overall staffing requirements are all in there, but I'd say the biggest chunk of the spend goes to infrastructure and tooling.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Interesting, interesting. That's a great insight; thanks for sharing. I want to move on and touch on something related: you mentioned that a large share of the time goes into execution, and that flaky tests are part of that.

But before getting into all of that, the important starting point is test authoring, right, Mudit? From a test authoring perspective, I want to understand which UI test frameworks the survey respondents were using at work. Are there any insights you could share on this?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - So yeah, I'd say this is something visible in the broader community as well. People get bogged down in questions like which framework is the most popular, which framework is most used, or which framework is better than another.

But in practice, when we looked at the survey responses on which tooling people are using, we learned that nobody is using only one set of tooling or only one type of authoring tool.

In fact, 75% of organizations are using more than two frameworks for their overall test automation, and nearly 39% to 40% of companies are using three frameworks. That's also to be expected, because test authoring today is very collaborative in nature.

We see this in the survey as well: in nearly 40% of companies, 38.6% to be exact, both developers and QA write tests. So it becomes important to choose tooling that matches the skill sets of both the developers and the QAs, the testers.

Developers are usually experts in the framework they use to build the system under test, so that starts to define which tooling gets used. And in the same way, the QA team should become proficient in whatever tooling the developers are using.

That said, we received feedback from the community on the survey, specifically on this question, that we did not delve very deep into the overall tooling required across the whole life cycle.

That is something we plan to iterate on in our next survey, which will be released in 2024. We'll do a more diverse, in-depth study of exactly which tools are being used across companies throughout their development cycle. So this is something we're going to improve next year.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - That's interesting; thanks for acknowledging it. Not many do that. It's good feedback, because when we talk about quality as a whole process, touching on quality at every stage is very, very important.

There's always room for improvement, and we're very open to feedback, so I hope we can add that to the next one. Thanks for sharing that, Mudit. Now, we keep saying AI is the future, right?

But sometimes we forget, or are simply curious to know: has everyone actually migrated to or started using CI/CD yet? Are there any insights on CI/CD adoption that you could share with us?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Company-wise, yes, it's visible that CI/CD adoption is pretty high. Nearly 88% of organizations across the board are leveraging CI/CD tools, and among large organizations, nearly 90% are using some sort of CI/CD tooling.

But that doesn't fully translate into QA practices. For example, though 88% are using CI/CD tools, 45% of test runs are still triggered manually, either from a local system or through a CLI. And one of the biggest advantages of a CI/CD system is distributing tests across different machines and scaling up.

Yet 32% of people are not running their tests in parallel; they simply run tests sequentially, first come, first served. Another interesting insight was that 48% of people still run tests on a locally hosted machine or an in-house grid.

They are not leveraging the cloud or the broader CI/CD ecosystem. So even though CI/CD has been adopted, the CI/CD aspect of testing, and of automation testing in particular, is not fully leveraged, and I feel that creates a lot of issues.

For example, we discussed earlier that people are still facing flakiness; nearly 11% of their time is spent fixing those flaky tests, and that largely comes down to improper infrastructure. They also spend nearly 11% of their time on test environment setup and maintenance.

Because they're running locally or on a self-managed grid, they have to spend a lot of time just getting the infrastructure in place before running the tests. So I'd call this an area of improvement for 2024: if you have already adopted CI/CD, and around 90% have, the question is how to properly leverage it for automation testing and for running automation testing at scale.
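
To make that concrete, here is a minimal, vendor-neutral sketch in Python of splitting a suite across parallel workers instead of triggering it manually and running it sequentially. The tests/ directory layout and the worker count of four are assumptions for illustration; in practice a CI runner, a plugin like pytest-xdist, or a cloud grid would handle the distribution.

```python
# A minimal sketch of parallel test execution instead of a first-come,
# first-served sequential run. Paths and worker count are assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def run_suite(test_file: Path) -> tuple[str, int]:
    """Run one test file in its own pytest process and return its exit code."""
    result = subprocess.run(["pytest", str(test_file), "-q"], capture_output=True)
    return test_file.name, result.returncode

if __name__ == "__main__":
    test_files = sorted(Path("tests").glob("test_*.py"))
    with ThreadPoolExecutor(max_workers=4) as pool:  # four workers instead of one
        for name, code in pool.map(run_suite, test_files):
            print(f"{name}: {'passed' if code == 0 else 'failed'}")
```

The same pattern extends from local threads to CI jobs or remote machines; only the dispatch mechanism changes.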

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Yeah, the percentages you shared are quite interesting: 48% of people either run tests locally or on a self-hosted grid. I'm a little more forgiving about the self-hosted ones; at least they're running remotely, which saves some productivity.

But it's quite alarming to hear about local machines, and that definitely needs to improve. For someone running on a self-hosted grid, what are they missing out on by not running in the cloud? What advantages do you see in running in the cloud versus self-hosted?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Of course, the most important aspect is the flakiness we talked about. Running tests locally creates a lot of issues, particularly a lot of flakiness. In addition, a lot of time is spent across teams, not just testers but DevOps and the rest of the team as well, setting up and maintaining that test environment.

In fact, we have seen a lot of companies employing 5 to 10 DevOps engineers just to maintain the test infrastructure; that shows up in the survey report as well. But scalability and flexibility are among the biggest issues. You're running tests today, but as your product, team size, and practices grow, you will be running more and more tests.

At some point the system will break simply because you can't scale with the product, so scalability is one of the biggest issues. And another is...

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - When you mention setting all of that up with a dedicated team, one thing that stands out for me is the diversity of where you're running your tests: the availability of machines and of different combinations, which gets even harder when it comes to mobile devices, right?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, yeah. We see that in the survey as well: people run tests across emulators, simulators, and real mobile devices. In fact, we feel that mix is the right strategy, rather than going only with emulators or only with real devices.

People should do a diverse mix of both. But that kind of distribution is very difficult to do when you are running tests locally, right? Connecting each and every device becomes a challenge. So it's always a good practice to set everything up on a cloud. And it's also easier to integrate everything on a cloud platform.

Your CI/CD platform is in the cloud, your reporting can be in the cloud, and so can your test execution. Bringing all that data together in one ecosystem, in one place on a cloud-based platform, makes your life much easier, and the overall process becomes quite optimized.
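
To show how small the switch usually is, here is a minimal Selenium sketch of pointing an existing test at a remote grid instead of a local browser. The hub URL, credentials, and capability values below are placeholders, not a specific provider's required format; the exact connection details come from your grid or cloud provider's documentation.

```python
# Minimal sketch: the test logic stays the same, only the driver construction
# changes from a local browser to a remote (self-hosted or cloud) grid.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.set_capability("platformName", "Windows 11")  # standard W3C capabilities
options.set_capability("browserVersion", "latest")

driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hub.example-grid.com/wd/hub",
    options=options,
)
try:
    driver.get("https://www.lambdatest.com")
    print(driver.title)
finally:
    driver.quit()
```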

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Absolutely, I think that's pretty clear. Thanks for sharing that. Now, slightly related to that, and really about running, or rather orchestrating, your tests smartly: going back a couple of questions to where testers spend most of their time, what are your thoughts? How should companies be running their tests?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, so this is something the industry has been doing for the past five to six years: moving towards shift-left. People are increasing their overall automation, writing more and more of their tests as automation, and moving towards continuous testing. None of these things is new.

People have been doing this for a long time and scaling it up; we have seen a huge increase in the last five years. But a very brute-force approach is still going on: people run all the automation tests all the time. The smart factor, the intelligence factor, is not there.

And this is what people should change. Instead of just running tests by brute force, run them in a smarter manner, which we now call test orchestration, one of the most important aspects here. We see that nearly 36% of companies, including larger companies, are not doing test orchestration in any way.

They don't prioritize which tests should run first or what the order of those tests should be. Most testing is prioritized, if at all, by the criticality or functionality of the feature being tested. But again, as I said, it's largely brute force.

The idea here is to cut down the developer feedback time. As we said, a lot of time is being spent just monitoring test execution; that time should instead go towards shortening the feedback loop. And in fact, there are now a lot of ways people can achieve that.

For example, one of the features of the LambdaTest HyperExecute platform is running previously failed tests first: based on past runs, the next run starts with the tests that were failing earlier.

That way, developers get faster regression feedback. They find out sooner whether the changes they made actually fixed the issues in the current iteration, and if not, they can go back to the drawing board without having to wait the full 60 minutes.
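
For readers who want to picture the "previously failed tests first" idea in code, here is a minimal sketch. It only illustrates the ordering principle, not how HyperExecute implements it internally; the JSON history file and its format are assumptions.

```python
# Minimal sketch of failed-tests-first ordering, assuming a JSON file that
# records the previous run's outcomes, e.g. {"test_login.py": "failed"}.
import json
from pathlib import Path

HISTORY_FILE = Path("last_run_results.json")  # assumed location and format

def order_tests(test_files: list[str]) -> list[str]:
    """Put tests that failed in the previous run first so regressions surface sooner."""
    history = json.loads(HISTORY_FILE.read_text()) if HISTORY_FILE.exists() else {}
    failed_last = {name for name, outcome in history.items() if outcome == "failed"}
    # False sorts before True, so previously failed tests move to the front.
    return sorted(test_files, key=lambda t: (t not in failed_last, t))

if __name__ == "__main__":
    print(order_tests(["test_checkout.py", "test_login.py", "test_search.py"]))
```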

There is also another aspect: load balancing across different machines. If you're using hundreds of machines to distribute your tests, there should be a smarter way to distribute them so you get the best overall test execution times. And there are more forward-looking aspects to this as well.

In fact, AI has a role here. You should not be running all the tests all the time; the smarter way is to intelligently select the tests that need to be executed for the specific change you are validating. Of course, to build up confidence you can still run the full suite from time to time, but it's generally better to make a smarter selection of the tests to run.

But another point I want to bring in is that no matter how smartly you run your tests, there are still big challenges in automation testing today, and the most important one we still see is flakiness. So Manoj, what's your take on flakiness still being one of the biggest challenges that testers face?

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Yes, certainly. You can't escape flakiness, right? That's just the reality. Last week I gave a talk at a meetup group here in Sydney on Continuous Testing with AI, and a couple of slides covered flaky tests. Even after the talk there were questions about flaky tests, and someone asked me this:

"I'm running my test locally and it passes, and when I run it remotely, it fails. So where does the flakiness live? Is it the machine, or is it the application?" Honestly, I don't know the answer; that's exactly where the tester's role comes in, to figure out why. Because if you run your test remotely,

it could be on a different network, the system resources could be different, and it might be optimized differently. Remember, those same conditions could exist on your users' machines as well, so potentially the same failure could show up for them too. That's why I'd say flakiness is an opportunity for you to understand more about your application.

A lot of people still use retries as the mechanism to get past flakiness. I would say a rerun is not the ideal way to deal with it; of course, when you run the test again it sometimes passes, and that's exactly why we call it flaky.
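
As a concrete picture of the retry pattern being described, and of why it only masks flakiness rather than fixing it, here is a minimal sketch of a retry decorator. The attempt count and delay are arbitrary assumptions; the point is that every "passed on retry" should be treated as a signal to investigate, not as a clean pass.

```python
# Minimal sketch of retrying a flaky test. Retries hide flakiness rather than
# fix it, so each extra attempt is logged as something worth investigating.
import functools
import time

def retry(attempts: int = 3, delay: float = 1.0):
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return test_fn(*args, **kwargs)
                except AssertionError:
                    if attempt == attempts:
                        raise  # give up after the final attempt
                    print(f"{test_fn.__name__} failed on attempt {attempt}; retrying")
                    time.sleep(delay)
        return wrapper
    return decorator
```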

Ideally, and this is where the reporting mechanism really helps, you dig into why. One of the interesting facts from the report is that 58% of teams encounter flaky tests in more than one percent of their runs. And this is not limited to small or mid-size organizations.

Even among large organizations, up to 28% said they face more than five percent flaky tests. So overall, what this points to is better infrastructure and good practices: setting up CI/CD properly, the continuous testing we spoke about, and using AI wherever possible to help you out.

As you rightly mentioned, it's about running your tests smartly, not running everything every time. If there is a mechanism to run only the relevant tests based on your recent code changes, that could be one way AI helps you.

It can also prioritize your tests based on your last couple of builds and say, hey, these tests failed in the last couple of builds; maybe run them first so they fail fast. Failing fast is a key principle of continuous testing and CI/CD, because otherwise you don't know whether the code you just wrote is good or not.
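
A simple way to picture change-based selection is a mapping from source files to the tests that cover them, with git supplying the list of recent changes. The mapping below is a hand-written assumption for illustration; AI-assisted tools aim to infer it automatically from history and coverage data.

```python
# Minimal sketch of selecting tests from recent code changes.
# COVERAGE_MAP is an assumed, hand-maintained mapping used only for illustration.
import subprocess

COVERAGE_MAP = {
    "app/payments.py": ["tests/test_checkout.py", "tests/test_refunds.py"],
    "app/auth.py": ["tests/test_login.py"],
}

def changed_files(base: str = "origin/main") -> list[str]:
    """List files touched since the base branch, using git."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line]

def select_tests() -> list[str]:
    """Return only the tests mapped to the files that changed."""
    selected = {test for path in changed_files() for test in COVERAGE_MAP.get(path, [])}
    return sorted(selected)

if __name__ == "__main__":
    print(select_tests() or "no mapped tests; fall back to the full suite")
```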

A lot of this revolves around that, and AI will help us a lot. To wrap up my take on flakiness: yes, it will always be there, but it's an opportunity to understand how you can improve your tests or debug your application. And as I said, much of this comes down to having more data, which brings us to observability.

Is that something you can touch on? Did any of the survey responses talk about observability as a practice?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, in fact, that was a pretty interesting topic as well, and I was getting to it. We see that people spend a lot of their time triaging and debugging, nearly 12% of their time, and another 12% analyzing test results. All of that is in the survey report.

There's also an additional 8% spent identifying flaky tests, which we talked about. All of that suggests that people spend, and have to spend, a lot of time on reports and on this observability-type work, whether they are triaging or analyzing results.

But that contrasts with the fact that people have not adopted any specific tooling for it. Nearly 30% of organizations have no test intelligence or observability setup in place, and nearly 20% of companies don't even have a basic test reporting setup. So people are busy doing a lot of test automation,

but the steps that come after, analyzing those test results, are where they still lag behind. There is no proper test intelligence and observability setup in many companies, and that will definitely hamper their ability to scale as well.

Unless you get the right insights out of your test execution data at the right time, you won't be able to make the right decisions or improve quality. We see this in our data as well. As a goal for 2024, I'd say companies should focus on building this set of tooling into their practices.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Yeah, just to add to that, I'd say observability should be a way of life. I'm not talking only about the OpenTelemetry APIs, though they are certainly part of it; I remember in the Selenium project we instrumented the Selenium code base with the OpenTelemetry API so you could see the traces.

When we say observability, it's all about traces, logs, and metrics, right? So what you said about the report, especially the people who do not have even basic reporting, is very important and needs to be taken care of, because the return on investment of test automation will largely be known only from the history of reports, or by looking at your past builds.
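
For anyone curious what a trace around a test step looks like, here is a minimal sketch using the OpenTelemetry Python SDK (the opentelemetry-api and opentelemetry-sdk packages). The span name and attributes are made up for illustration, and the console exporter is used only so the trace is visible locally; a real setup would export to an observability backend.

```python
# Minimal sketch: wrap one test step in an OpenTelemetry span so the run shows
# up as a trace. Span names and attributes are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ui-tests")

with tracer.start_as_current_span("login-flow") as span:
    span.set_attribute("test.browser", "chrome")
    span.set_attribute("test.environment", "staging")
    # ... drive the browser and make assertions here ...
    span.add_event("login form submitted")
```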

That's one way to see how your tests have performed and whether they're actually useful. People say that if your automation isn't finding bugs, it's of no use; that's one way of looking at it. I would rather say: use those test results and reporting mechanisms and build on them. As you rightly said, the companies that haven't built that mechanism yet should focus there; it's very, very important.

I don't know whether AI can help there, but speaking of AI, let's jump in and talk about AI in testing. It's one of the key things to look forward to in 2024: open any social feed and you see AI everywhere, and talk to any exec and they have a budget for AI.

Speaking of AI, Mudit, I see that about 78% of software testers have already adopted AI in some form, which honestly I didn't expect. I would have guessed around 20-25%, maybe 30% at most, so 78% is quite a large share for software testers. Now, how should companies prepare for the age of AI?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - We already see that people have adopted AI-based tooling, particularly text generators. Nearly 80% of the survey respondents have used ChatGPT, are using it currently, or are at least aware of it. For code generation, with GitHub Copilot and similar tools, nearly 45% of people are aware of those tools and are using them effectively.

So companies should start evaluating what kind of tooling is the right fit. To be fair, it's still early days for most people, and everybody is exploring what tooling is out there and what can actually help. However, the overall outlook on these tools is pretty positive, with nearly 30% of people saying they see these tools having a positive impact and increasing the team's overall productivity.

One of the biggest use cases, cited by nearly 25% of people, is that AI can help bridge the gap between manual and automated testing. For people moving away from manual testing processes who want to automate them, there is a learning curve involved, and nearly 25% say AI can help them with that transition.

But it's not just that; people are using other aspects of AI as well. Most of what gets talked about right now relates to GenAI, ChatGPT, and the like, but there is another aspect, which is cognitive AI.

That means systems that look at large amounts of data and surface insights. Knowingly or unknowingly, people are already using a lot of tools that involve this. For example, if anybody is using visual regression testing, there's a good chance AI is involved in it.

LambdaTest's own visual regression testing, for example, has an AI aspect in the back end. And it goes beyond that, into test intelligence: nearly 35% of respondents are using AI tooling for analytics and reporting, looking at large sets of test execution data and turning them into reports and analysis, and 26% are using AI tooling for test optimization. We also talked about the orchestration side of things.

Optimization and prioritization of testing is something AI is helping with, and 18% of people are using it for scheduling and, of course, test orchestration. On the generative AI side, specifically around test data generation and tasks like test case creation, nearly 50% of people are using it for test data generation and 45% for test case creation. But cognitive AI is very much in use as well.

And that brings us to the next topic. We see that people are adopting CI/CD, and they are adopting AI tooling. So what do you feel the roadmap should be for developers and testers? Where should they invest their time and effort in upskilling themselves to prepare for this, let's say, new wave of technologies?

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Yeah, that question brings me to the edge of my seat. I think you're right: a lot of the people I've talked to are already using AI in some way, which is interesting.

Looking at the use cases specifically for testers, as you mentioned, they are already reaching for smart mechanisms, like generating test cases, or using tools like GitHub Copilot, which is one example of the broader generative AI use case, right?

If you have your own model and data, you can of course train your own; but that's just a small example of it. AI can also help with the test intelligence part. When you have a large number of builds, say you've run 100 tests, what is the data telling you? We spoke about basic test reporting in an earlier question.

Here, AI can give you that sort of test intelligence and analytics, and going one step further, it can give you predictive analysis based on your test runs and all of that metadata. I think that really helps.

And from a developer perspective, needless to say, look at the chatbots on every website we visit: most questions are answered by them, and the time people have to wait for a customer call has come down.

Even from a development metrics perspective, a lot of things have improved. AI is definitely helping, and people are moving towards AI-augmented development and AI-augmented testing; I think that's really the key to the answer. It's already happening to some degree, and I believe it will be even stronger in 2024.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, another great takeaway from this survey, which we were also talking about earlier, is the move towards CI/CD platforms. And this is something I think testers should be upskilling themselves on.

It's another tool set in the overall testing ecosystem that they will have to use. If you're not familiar with CI/CD tools, or not using them yet, that is something you can start learning more about.

For example, we see that even in large companies today, SDETs and QAs are responsible for integrating automation tests into the CI/CD platform. In nearly 46% of companies, the job of integrating automation tests into the CI/CD pipelines sits with SDETs.

So if you're still lagging behind in that skill set, that is something you can start improving. Another aspect is that a lot of companies are still not using a CI/CD platform. If you are in one of those 12.5% of companies that don't run automation tests as part of CI/CD pipelines, that is an initiative you can start in your present company.

Another aspect we also discussed earlier is test orchestration: 36% of companies are not doing test orchestration in any way. So learn how to orchestrate those tests, whether through the CI/CD platform, through the many frameworks that support those kinds of capabilities, or through dedicated tooling.

For example, the LambdaTest HyperExecute platform has built-in test orchestration and built-in CI/CD features that can help you achieve that. Learning how to leverage that kind of tooling for your test orchestration is definitely something worth spending your time on.

I think we are just about out of time, Manoj, but the last thing I want to discuss is a very big question. Though we have been talking about AI and all of those conversations have been positive so far, I still see that companies may be hesitant to adopt AI as a broader investment.

What questions do you feel companies should answer before they dive headfirst into adopting AI tooling? What challenges are still there to be overcome in this regard?

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Yeah, I think that's a tough question. We should probably ask it in the next survey, if we haven't already. But in my opinion, AI adoption can seem easy at first.

The key questions would be around alignment with business goals; using AI shouldn't be the goal in itself. "If I'm using AI, I'll have a well-placed product", no, that shouldn't be the assumption.

You should always look at how AI is going to augment and help your product grow, be it quality-wise or development-wise. Start with the product itself, and then see how AI is going to help around it.

And I think one of the challenges would be integrating with existing systems, because unless you're building a greenfield product, you already have a product at some stage and you're then trying to bring AI into it.

So teams should look at the onboarding experience for any of these AI tools: is it seamless? That's one of the key questions to ask, apart from the goals. The next key thing would be the ethical considerations, bias in particular: depending on how you're using AI, it can be biased.

From a testing perspective or an infrastructure perspective it plays out differently, but overall, from a usage-of-AI perspective, it's very important to understand how AI is going to help you while keeping that fairness. The other thing would be the availability of data; that's very, very important.

And it varies depending on where you're coming from: in Europe, for example, you have GDPR and similar regulatory compliance requirements, as I touched on earlier. A lot of things come into the picture. I think this is just the beginning, and there will be a lot more to come.

Using AI responsibly is also one of the key things to consider. That's what I believe. But do you want to add anything? Yeah, go ahead, I can see you're already excited to share something.

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, we had this kind of question in our survey as well. I won't dwell on security, which is one of the most important considerations big companies usually take into account; they will already be looking at how secure a tool is whenever they adopt AI. But I'd say one of the biggest challenges people feel right now is reliability.

Nearly 60% of respondents feel the biggest challenge with adopting AI is the reliability aspect. These are still new tools; they are not 100% perfect, and as I said, a lot of machine learning still has to happen here. So reliability is still a big challenge.

Another challenge, which nearly 54% of respondents cited, is the skill gap. Even though the AI tooling exists, people are still struggling with how to properly leverage it. And adding this AI tooling to already-running cycles and workflows can make it a little complex to get the right results out of it.

So complexity is one of the important challenges as well. But yeah, I think as the tools improve further in the coming year, 2024, the adoption of these toolings will increase as well.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Yeah, that's quite interesting too, and I'm glad what I touched on lines up with most of it: being able to integrate with existing systems, the bias factor, and the onboarding experience.

And the onboarding experience isn't just about the tool itself; it's also about the skill gap, whether my colleagues and co-workers understand what this is all about. So that's interesting. Thanks, Mudit, that was wonderful, and as you mentioned, it brings us to the very end of our session for today.

Thank you very much for sharing these insights. And thank you all so much for listening to this episode of the XP Series, featuring Mudit Singh, a founding member of LambdaTest who heads Growth and Marketing at LambdaTest.

I'm Manoj Kumar, heading Developer Relations, and this episode covered the Future of Quality Assurance Survey. If you have any feedback, questions, or comments, feel free to share them with us. We will also link to the survey below. As I mentioned, the survey has its roots in the Testμ Conference.

Testμ Conference 2024 is happening from August 21st to 23rd. I look forward to seeing you there at the conference, and we will also be running this survey again. Is that right, Mudit?

Mudit Singh (Head of Growth & Marketing, LambdaTest) - Yeah, in 2024 we'll be running it again, hopefully with better questions. So if you have any feedback on our present data, want to know more about anything, or want us to dive deeper into any topic in the next survey, feel free to send in your thoughts, and we'll be happy to improve on the current survey as well.

And most importantly, don't forget to subscribe. We are coming out with more XP Series webinars soon, so do subscribe to the LambdaTest YouTube channel to keep yourself updated.

Manoj Kumar (VP - Developer Relations & OSPO, LambdaTest) - Absolutely, that's very important. Do subscribe, guys. With that, signing off, Mudit and Manoj, and we'll see you in the next show, in the next edition maybe. Until then, do take care and have a great rest of the year.

Past Talks

Faster Feedback with Intelligent CD Pipelines

In this webinar, you'll delve into the heartbeat of modern software delivery and will learn how to optimize your CI/CD pipelines for faster and more efficient feedback loops.

Fast and Furious: The Psychology of Web Performance

In this webinar, you'll delve into the intricate psychology of web performance. Uncover the significance of prioritizing performance over design, understand why slow websites induce irritation, and examine the profound impact a 10-second response time can have on user satisfaction.

Revolutionizing Testing with Test Automation as a Service (TaaS)

In this XP Webinar, you'll learn about revolutionizing testing through Test Automation as a Service (TaaS). Discover how TaaS enhances agility, accelerates release cycles, and ensures robust software quality.
