XP Series Webinar

From Brainwave to Inbox: Avo's Whimsical Adventure through AI-Powered Test Automation

In this XP Webinar, you'll delve into Avo's magical AI-powered test automation journey, unraveling the whimsical adventures from brainwave to inbox, showcasing transformative innovation along the way.

Manish Jha

Senior Director - Product Head, Avo Automation

Manish Jha, a seasoned product leader, excels in crafting innovative solutions across diverse industries, including automation platforms, Web and Mobile apps, social and gaming platforms, and data analytics tools. With expertise in product road mapping, strategy formulation, and fostering strategic partnerships, he navigates new markets successfully, delivering impactful results.

Kavya

Director of Product Marketing, LambdaTest

At LambdaTest, Kavya leads various aspects, including Product Marketing, DevRel Marketing, Partnerships, Field Marketing, and Branding. Previously, Kavya has worked with Internshala, where she managed PR & Media, Social Media, Content, and marketing initiatives. Passionate about startups and software technology, Kavya excels in creating and executing marketing strategies.

The full transcript

Kavya (Director of Product Marketing, LambdaTest) - Hi, everyone. Welcome to another exciting episode of the LambdaTest XP Series. Through the XP Series, we dive into a world of innovations and insights featuring renowned industry experts and business leaders in the testing and QA ecosystem.

I'm your host, Kavya, Director of Product Marketing at LambdaTest, and it's an absolute pleasure to have you all with us today. Today's webinar explores the fascinating journey of AI-powered test automation with our guest on the show, Manish Jha, Senior Director and Product Head at Avo Automation.

Manish is a seasoned product leader known for crafting innovative solutions across diverse industries. He brings a wealth of experience in product road mapping and strategic partnerships, and his customer-centric approach and expertise in agile methodologies have consistently delivered exceptional results at Avo Automation.

Avo Automation is an AI-driven, no-code end-to-end test automation platform that enables continuous quality assurance. Its solutions, such as test data management and machine learning capabilities, help teams improve the quality, reliability, and of course, speed of their software development process.

It's a privilege to have you join us today. Welcome to the show, Manish.

Manish Jha (Senior Director - Product Head, Avo Automation) - Thank you, Kavya.

Kavya (Director of Product Marketing, LambdaTest) - Great. So why don't you tell our audience more about yourself and your journey?

Manish Jha (Senior Director - Product Head, Avo Automation) - Sure. I mean, you've already touched on it, but quickly, in the interest of the audience: I've been in product management for most of my career, though that was not the title back then. And when I say back then, I'm going quite a way down the calendar, to around the early 2000s.

But yes, my interest has always been around product design, product management, and crafting out what the solution around the product should be, even when I was not directly designated as the product management guy. My most recent stint before Avo was in automotive, where the company I was working with was the largest dealer management provider.

And before that, of course, I've worked across the AI and ML space. I started my career in gaming, which is quite uncommon in India. But yeah, that's how I started. And then I moved on to different roles, of course, and have worn multiple hats.

Key areas, as you rightly said, are product road mapping, product pricing, strategizing, keeping an eye on the trends and the competition, and of course, understanding how we are still relevant, and sort of course correcting every now and then with the roadmap.

I think that's my key area, and I love doing that as the Head of Product at Avo Automation; those are some of the things that I definitely take care of, along with, of course, the extended functions of product marketing and support.

Kavya (Director of Product Marketing, LambdaTest) - Very interesting to hear; that is definitely diverse, from gaming to the software testing and QE space, right? That is quite a journey. Thank you so much for sharing that, Manish.

So I was recently reading about the origins of AI and came across an interesting example, by the way. It is said that in Greek mythology, Hephaestus, the god of craftsmanship, had apparently built mechanical attendants who were, you know, capable of independent thought.

I'm not sure how much truth lies in that, but it was very interesting. And reading about that, all the way through to the latest advancements that we see, right, AI's journey has definitely been fascinating for all of us.

And especially working in the QE and testing space, the future of testing through the implementation of AI is something that we've been talking about and reading about on a daily basis now. And today, we are here to talk about Avo Automation's adventure through AI-powered test automation. So, Manish, I would definitely love to start with your thoughts on it.

Manish Jha (Senior Director - Product Head, Avo Automation) - Sure. So I'll give you some references on how we are strategizing internally and leveraging AI, or Gen AI, which is the new buzzword these days, of course, and some of the things in flight where we are positioning ourselves over the next 6 to 12 months.

And what will make it different, of course? If I have to talk about it, it spans a couple of multifaceted areas. When we talk about AI, there are so many flavors to it that you still have to extract what is relevant for you, especially in the test automation world.

So I'll keep it fairly precise, within the boundary of test automation. The way I look at it, the first bucket I'd put in is the test data side of things. When we talk about test automation, I personally feel that test data cannot be disregarded. It's very tightly coupled with it as a subdomain; you take it along through the journey of your testing cycle. So that's one area.

And I definitely see that quite diverse and realistic test data sets are being developed in the market. I see a couple of providers doing that, and internally, we are also leveraging it.

So when you're actually testing your software or your application under test, your tool might do the job of automating it, but what you're feeding in and what you're getting out of it also matters, right? That's when you can actually call it a QA-certified or QC-certified product or build.

Having said that, I think it's very, very important that we are also looking into creating those kinds of data sets, whether it's for enterprise applications like SAP and Oracle, or for a simple, let's say, banking application where you want to test, for example, half a million credit cards used by that particular bank. How do you test that?

So that is definitely an area where we are using Gen AI to produce synthetic data, and it is helping our testers a lot. It also reduces a lot of manual effort, right? Because you are now covering an end-to-end spectrum and getting a realistic read on whether your system would be able to handle it or not. So that's one.
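To make that concrete for readers, here is a minimal sketch of what this style of synthetic test data generation can look like, using the open-source Python Faker library. This is an illustrative assumption, not Avo's actual implementation.

```python
# Illustrative sketch only -- not Avo's implementation. Assumes the
# open-source `faker` package (pip install faker) as the generator.
from faker import Faker

fake = Faker()

def generate_credit_card_records(count: int) -> list[dict]:
    """Generate synthetic cardholder records for functional or load testing."""
    return [
        {
            "holder_name": fake.name(),
            "card_number": fake.credit_card_number(),
            "expiry": fake.credit_card_expire(),   # e.g. "08/28"
            "billing_address": fake.address(),
        }
        for _ in range(count)
    ]

# e.g. feed half a million synthetic cards into the application under test
test_cards = generate_credit_card_records(500_000)
print(test_cards[0])
```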

The second area, though many people would have already heard about it, and some companies are already venturing into it. When we talk about automation, in a nutshell, I'd say a lot of manual work is still being done. And when I say manual, I am talking about the stage before testing, right? So what happens before the testing?

So, you know, a customer might have given the company a dump of requirements, or your business analyst would have gone on site and captured, let's say, the customer asks. Or, for example, in a product-based company, it's "this is where we will innovate."

So there would be, you know, an SRS document or an Epic in some format, basically a requirement. Now there's a whole cycle of explaining that requirement back to the engineering team and the manual testers, saying, hey, this is what feature X looks like, right, or this is product Y that you need to develop.

Did you understand that? And then from that, the testing team would start adopting it and writing manual test cases. So you see, this is a lot of manual work. First is the understanding part, which is manual, rational thinking.

And second, of course, is translating what the other person has envisioned as a product feature; it needs to translate well. So that's where we are leveraging Gen AI, because we feel you can actually be more productive and put your resources and bandwidth to use in better areas.

It's a no-brainer if you can leverage some of the existing technology. And that's what Avo is doing, going from requirements to manual test cases.

And then from manual test cases, in phase two of what we are doing, we also create an automation script, an Avo-compatible script. So you basically get an end-to-end spectrum, from requirement all the way to automation. So those are some of the adventures.
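As a rough illustration of that requirement-to-test-case flow, here is a hypothetical sketch using the OpenAI Python client; the prompt and model are our assumptions, not Avo's actual pipeline.

```python
# Hypothetical sketch of requirement -> manual test cases via an LLM.
# Not Avo's pipeline; assumes the `openai` package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = """
Feature X: a registered user can reset their password via an emailed
one-time link that expires after 30 minutes.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a QA analyst. Convert requirements written in "
                    "plain English into numbered manual test cases, each "
                    "with steps and expected results."},
        {"role": "user", "content": requirement},
    ],
)

print(response.choices[0].message.content)  # draft test cases for human review
```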

Kavya (Director of Product Marketing, LambdaTest) - Thank you so much for summing that up. It is definitely the TL;DR for me. So very interesting to learn about this. And from there on, I just want to jump onto the very first question.

How do you address the challenges of ensuring the reliability and accuracy of AI models used in test automation, right? Especially when it comes to a dynamic and evolving software environment, because definitely it's changing, right? As we speak, it's changing.

Manish Jha (Senior Director - Product Head, Avo Automation) - Absolutely. And that's a great question, and a great first question, I would say. Because now this would have created some curiosity in the audience: OK, you're using it, but you must also be facing some challenges.

So again, if you look at it, it's a multifaceted approach, how we are dealing with this. The strategies we follow internally are about getting relevant and staying relevant in what you're practicing. Number one, I would put it as: evaluate and validate.

So you might use some existing LLM models or certain free data sets, but evaluating those models in your own testing environment, that is one variable, right? Because they may have been tested in other situations, for other use cases.

So how do you plug that in and marry it with your own use cases, number one, and then see whether the results you're expecting actually come out or not? Secondly, we are also playing around with the accuracy and effectiveness of it, and trying to gauge that.

It might spit out some results, but was it 50% accurate? 40%? Where exactly it stands is something we are constantly analyzing. And with that, what happens is that the feedback trains the system, whether the output is not what we're expecting or, let's say, already what we're looking for, right?

So I think that is very important data for any company, organization, or practitioner, and they should be doing it, which is what we are also doing. The second area I would put in is the dynamic way you train your model.

I think it's important because, again, coming back to the same example, there would be situations, or references to keep in mind, right? Whatever model you're taking, for example Anthropic or Cohere, whatever your use case is.

When you take it to your actual playground, with your own applications, what you're dealing with, your application is continuously evolving, right? So whatever it was in the current sprint today, versus how you're positioning your product six months down the line, it might look very different, right?

Today you might go from A to B, but your roadmap says you have to go from A to, let's say, C. That means your product has to evolve, which means it will have more bolt-ons, more pillars on top of it, and it might shape up into something different, right?

And I think keeping that in mind, and training back according to how your software is evolving, is a very important and continuous learning aspect. Your training has to feed back into the system; it's quite adaptive that way.

So I think that's very important in terms of being relevant and effective in what you are expecting, and keeping it constantly in tandem with your product development. That was my point. And when I talk about feedback, one part is the product side of it. I think the third part is also around the humans, right?

So the testers who are testing it, your developers, and of course your product managers; they are the right stakeholders at the end of the day. When they see the output, they are also qualified stakeholders in saying whether this was the right data or the wrong data, whether the output was actually doing the job or not.

And keeping those as an input mechanism and training them back adds a little human touch to it, right? It can't just be the output, and then you feed it back with certain numbers and metrics and say, train it again and give me another result.

So I think that's another area, and that's what we keep doing week on week. We sync up with engineering, customer success, and the product team and ask: this is what we've got, do you think this is relevant? It's a very important part, and it's something that has generally come up in most of my conversations, even with analysts.

You know, like Gartner or Forrester: you have to keep your human-in-the-loop logic. You can't bypass it thinking these are smart systems, that they will do my job. If we do, I think it's quite scary, because you don't know how it might end up. Right?

So keeping the human in the loop is the very, very important part. You need somebody to approve things in order to go to the next level. So that's where I would keep it, and that's how we train our own internal engineering processes around using these technologies, and we keep evolving from there. I hope I've painted the picture of the methodologies and everything we're doing internally.

Kavya (Director of Product Marketing, LambdaTest) - Yeah, absolutely, I think you nailed it. Thank you so much for the insights, Manish. What stood out for me is how everyone's debating whether humans and AI can coexist, and you rightfully said that humans have to be in the loop.

And even as you were explaining the points, I could imagine how it's a collaborative effort between various teams, with constant iteration and constant collaboration going on.

There are multiple layers of evolution happening. And on top of all of this, of course, there is this coexistence of AI and human validation that's required for sure. So, super interesting to hear that, and that brings me to the next question.

Can you please elaborate on the integration of AI-driven test automation with continuous integration and continuous delivery pipelines, basically CI/CD pipelines, and how this integration impacts the development workflow?

Manish Jha (Senior Director - Product Head, Avo Automation) - Sure. I mean, it's a no-brainer that enterprises are trying to move into this area; they are following, or at least trying to adopt, CI/CD. One thing I generally add to this terminology is CT.

It's really CI/CD plus CT, continuous testing, because your integration is done and your deployment is done, but what about testing? And that is what Avo, or players like us, actually do. And I think one area where Avo plays a major role in CI/CD integration is this.

So when you think about that, let me take a step back. There's a pipeline, and there is code being merged into it, right? So one particular example is that the moment code gets merged, whether it's branch-level or main-level code, the test automation script linked to it should be triggered. Right?

I think that's very, very important from a continuous testing perspective, because you don't want to go back to the old-school practice of packaging up your build again and handing it to the testing team, who take another week or ten days and then come back with feedback, right? And by the way, in agile development a typical sprint runs for 14 days, of which the actual productive days are about ten.

So in that window you have to develop, test everything, and then release your product. That luxury is gone either way. And coming to the reality that you have to do all of it in that short span, you can't have manual intervention at that point.

So that's why this infrastructure, first of all, is very important. And secondly, I believe the kind of test being run, whether it's an integration, regression, or any other type of automated test, has to be triggered immediately. Point number two that I would associate with this is the feedback part of it, right?

You want your developers, the moment they have checked in certain code, to immediately understand whether it ran well, whether it behaved as expected, whether the acceptance criteria for that feature were met. I think that's very important.

One, it saves time. Second, it gives you an instant report card on how your build is shaping up, right? And the earlier you do it, I mean, it's not news for anybody, but early detection of bugs or issues in the system saves you a lot of money in downstream activities, honestly, millions of dollars, right?
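A minimal sketch of that merge-triggered feedback loop might look like the following; the suite path is hypothetical, and a real setup would wire this into the CI system's merge hook rather than run it as a standalone script.

```python
# Hypothetical CI step: run the automation suite linked to the merged
# branch and fail the pipeline immediately if anything regresses.
import subprocess
import sys

def run_linked_suite(suite_path: str) -> int:
    """Run the linked test suite with pytest and return its exit code."""
    result = subprocess.run(
        [sys.executable, "-m", "pytest", suite_path, "--tb=short"],
        capture_output=True, text=True,
    )
    print(result.stdout)  # surfaces the instant report card in the CI log
    return result.returncode

if __name__ == "__main__":
    # e.g. triggered by the pipeline right after a merge to main
    exit_code = run_linked_suite("tests/regression")
    sys.exit(exit_code)  # non-zero exit fails the build -> instant feedback
```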

So faster feedback loops are definitely an area, if I may put it that way. The third thing we have also seen is around test coverage. Now, test coverage, as a term, can be a little misleading.

I mean, that's my personal opinion, because with coverage, you might have had, let's say, 400 or 500 test cases, and you executed them, and you say, hey, I've done it, these are the passed and failed scenarios, and we have covered them. Right?

My honest opinion is that there should be the right element of risk-based test coverage, meaning: if something has been newly developed or changed in an existing flow, and it corresponds to one or two test cases that were not covered in your regression or anywhere else,

then irrespective of the 400 or 500 test cases you have run, if those two were not covered, you are at very high risk. So your coverage number does not give you any real picture, right? Those are the areas where an AI model, infused into CI/CD, can give you real clarity on what you should actually have been focusing on.

Actually, I was wasting time regressing those 500, which wasn't even needed because they were not affected in this release; I should have focused on these few instead. So that smart system of diverting attention to what is important versus what's okay to ignore is something we at Avo are also trying to solve, right?
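Here is a toy sketch of that risk-based selection idea, assuming a hand-maintained mapping from source modules to the test cases that cover them; the mapping and file names below are invented for illustration.

```python
# Illustrative risk-based test selection: run only the tests that cover
# the files changed in this release. The coverage mapping is invented.
import subprocess

COVERAGE_MAP = {
    "src/payments.py": ["tests/test_payments.py", "tests/test_checkout.py"],
    "src/auth.py":     ["tests/test_auth.py"],
    "src/search.py":   ["tests/test_search.py"],
}

def changed_files(base: str = "main") -> list[str]:
    """Files touched since `base`, via git diff."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def select_tests(files: list[str]) -> set[str]:
    """Pick just the tests affected by the change, not the full suite."""
    selected: set[str] = set()
    for f in files:
        selected.update(COVERAGE_MAP.get(f, []))
    return selected

if __name__ == "__main__":
    affected = select_tests(changed_files())
    print("Running only:", sorted(affected) or "nothing affected")
```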

It goes without saying, again, that it helps your resource utilization and bandwidth, and you can use that time in a better, more productive way. Among the things we are also looking at for the future, in flight for us, very early days of course, is predictive maintenance. That's a new terminology we are trying to coin in the market.

But you've got historical data, right? You've got all your input from the pipelines. You have all the raw data of pass and fail scenarios, and a lot of other metadata too. If you marry all that information, how about working out where the most potential failures are going to happen,

at the time of insertion of the code itself? If we switch gears to that level, a lot more can be expedited in product development, with much more visibility into early issues that can be traced and spotlighted.

And at the end of the day, it's all part of streamlining the testing process. From an enterprise perspective, what are the main goals? The main goals are: how do I go to market very quickly, and how do I keep my quality standards very high?

I hope my customers are not finding the build glitchy or buggy. And at the end of the day, my NPS, or whatever score they're following, should be good. So that's the motive: keeping the enterprise's goals in mind. And these tools don't come into the foreground, because they are not customer-facing products, right?

But they are the pillars of how you achieve those goals. So we are the enablers, I would say. We are at that post, helping enterprises build their quality software. And those are the methodologies we follow.

Kavya (Director of Product Marketing, LambdaTest) - That's quite insightful; again, thank you for sharing. I mean, of course, for enterprises, faster time to market is definitely the key when it comes to enhanced development workflows and all of that. And it's interesting to hear how Avo is ensuring AI blends seamlessly not just into CI/CD, but also into the continuous testing and execution part of it.

And yeah, the end goal for all of us is to make the lives of testers and developers easier. And yeah, as I said, good to see that Avo is evolving to keep pace with all the rapid changes that are happening in the testing and development space.

And talking about enterprises, you know, that also brings me to the next question. Can you share some insights and specific examples from Avo's product development journey where AI-driven test automation has led to significant benefits for enterprises?

Manish Jha (Senior Director - Product Head, Avo Automation) - Sure. So, one of the Gen AI features that we launched, though it's still in beta right now, will be productized next month in the full general release. That feature, by the way, was all about going from your requirements to test case creation.

We evaluated the traditional process: when you receive a requirement, what is the process of first capturing it, then how much time goes into the KTs or handover processes, and then there's sometimes a reverse KT, where people confirm, hey, this is what I've understood, is it right?

And then they go about writing their test cases and all that. When we replaced that with Avo, we found that you can slash the overall manual work by 50%. Now, yes, there is a caveat. People might argue that you are depending on a machine's output for generating test cases.

So what if I fed in junk data as input, and you're getting junk output? But then again, the same can be debated about the intelligence of somebody doing it manually: how much have they understood? What is their interpretation of the feature?

So it's quite an open-ended point, and there is a review cycle, of course. You will have certain syntaxes for how documents are written. You will have certain high-level acceptance criteria. You will say: this is the API requirement, or this is the middleware requirement, or the DB, or the front end.

And we don't need much meat in it, honestly. We just need the skeleton; if you have given us these things in plain English, that's it. There's no complexity of, oh, you need to write it in Gherkin or some other syntax. You write it like a novel, and we will understand the requirements and convert them into manual test cases.

So that's one area where we have seen a significant reduction, and we have seen teams redeploy their time to better areas, for example, data interpretation and the BI side of things: how were the results? They can analyze the last six executions and be more proactive about where things should be fixed, right?

And then have further downstream conversations between engineering and QA. So that's one. The second area we have seen is smart regression. This is another term from my previous point: if I tell you that out of 100 test cases you just have to focus on these five, what I'm telling you is to defocus from what you've been doing regularly.

Focus on only these five, so that you achieve the same result with the risk covered, and your coverage is still high. And when you actually execute that, there are two features at play here. One is, of course, running things in parallel, which distributes the load, and we know LambdaTest is pioneering that.

So I won't talk about that much, but consider that capability, combined with being told what to test; together, I think, it's a very powerful value proposition. And we have seen the results come faster than anticipated.

So if 500 test cases took six hours to give you the results, the blueprint of your application's health status, now you are running only 100. That means you are reporting much earlier, with better content and a focused report of what has been done.

That way, the team has been more proactive in fixing things, in understanding the cracks and the areas where things have not gone well. And this has helped enterprises focus: should we think about enhancing this feature, or should we think about maintaining it better and keeping it more robust, and then move to the next level?

So these are some of the downstream benefits that organizations have actually seen. Right?
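To illustrate the parallel-execution half of that equation, here is a generic sketch using Python's standard concurrent.futures; it is not LambdaTest's HyperExecute API or Avo's runner, and the test names are invented.

```python
# Generic illustration of distributing a reduced regression suite across
# workers. Not LambdaTest's or Avo's actual API; test names are invented.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def run_test(test_name: str) -> tuple[str, str]:
    """Stand-in for dispatching one test to a remote browser/OS grid."""
    time.sleep(0.1)  # simulate execution time
    return test_name, "PASSED"

smart_suite = [f"test_case_{i}" for i in range(100)]  # the focused 100, not 500

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    futures = [pool.submit(run_test, t) for t in smart_suite]
    for fut in as_completed(futures):
        name, status = fut.result()
        # a real runner would aggregate these into the health report

print(f"100 tests finished in {time.time() - start:.1f}s across 10 workers")
```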

Kavya (Director of Product Marketing, LambdaTest) - No, thank you. I mean, that's a very compelling pitch as well, because essentially you are freeing up the testers to help them focus on what's important, right? And I'm very curious to try my hand at what you mentioned, the test generation part of it, and the execution part too.

And yeah, interesting to hear about the smart regression features as well. Yeah, thanks for shedding light on Avo's product journey and mentioning how AI is essentially reducing the manual effort while also boosting the test execution part of it.

Great, and that also brings up the next question. You did mention the collaboration between Avo and LambdaTest. How does the partnership between Avo Automation and LambdaTest enhance testing capabilities for organizations, and what specific benefits can users expect from this collaboration?

Manish Jha (Senior Director - Product Head, Avo Automation) - Sure. So again, my unbiased view on this is very clear. For things that are already in the market through certain partners, I don't believe in redoing or re-engineering them at your end. There are champions in certain areas; for example, LambdaTest, you have HyperExecute and a lot of other capabilities, visual testing and so on.

So if I'm partnering with LambdaTest, why would I build this internally? Before our partnership, we had parallel execution and all that, but that was for a separate use case, a more complicated tech stack where it was required.

But coming to your point: see, at the end of the day, if I take a cue from the enterprise level, quality is paramount; there's just no negotiation about it. When a product has to be, say, certified by QA and QC groups, it is almost expected that there should be no issues in the product.

But as we evolve in the methodologies of how products are developed, we're talking about scaled agile, we're talking about agile, Kanban, and so on. And club that with the actual moving parts of an enterprise system.

And when I say moving parts, I mean: think about a large enterprise with several components, front-end applications, back-end applications, API layers in between, the database part of it, and the cloud part of it. Some of them are on-premises, some they're trying to migrate. It's a very messy world.

It's not that straightforward. And you are telling somebody to test that system, asking them, hey, do you have the confidence to say this will be bug-free? Right? So the point I'm trying to make here is that things out there are complicated.

I think a test automation platform like ours, together with a platform like yours, Kavya, LambdaTest, bridges that gap. When you have those different permutations and combinations, and I'll bring up the use case I'm referring to, it becomes like a one-stop solution, doesn't it?

Like, okay, I did my job here in Avo Automation for automation purposes, but now I need the scripts to run on multiple browsers or multiple OSs. How do I do that? If I start creating a device lab, I don't know when Avo would establish a business around that, even if we were interested. But I see LambdaTest doing that quite well. I'd rather partner with them and help my customers do that.

And the model in which we are working is all SaaS-based: use as much as you want to. It also breaks the barrier; you're not confined, right? And that's my next point. Imagine you have a customer, Kavya, who is a LambdaTest customer, but they want to move to Avo for some XYZ reason, and they've been using some other tool.

If they see that we are already partnered with you, how seamless would it be for them? They already have LambdaTest as an execution platform, let's say a channel partner; I don't have to find out or worry about whether I have to change my setup, what to do, or whether Avo will work with LambdaTest or not.

So from a customer lens too, it helps a lot to have that one seamless journey. That's what I see. And honestly, without shying away: the cost part of it, the effectiveness and scalability of it all, you know, how do you scale your in-house testing to that kind of environment?

I think that USP is definitely something to be tabled, and that's what brings the unique value proposition when I talk about Avo and LambdaTest as a joint value prop.

Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Manish, that really helps. And just to add, we are excited about how the partnership is sort of shaping up. The end goal for all of us at the end of the day is to elevate testing together. Right. As you said, we want to ensure that customers are having a seamless experience.

And I think our joint value prop sort of underscores the commitment that we're making towards testing when it comes to innovation, building comprehensive solutions for the customers at the end of the day, right?

Great, and moving on: as we look into the future, what are some emerging trends and technologies in AI-powered test automation that we should be excited about?

Manish Jha (Senior Director - Product Head, Avo Automation) - So, one thing, and I'm sure I might want to broadcast this news via this channel: we were featured in the Gartner market guide that got published two weeks ago.

It's a very interesting document to read, Kavya, in general too, to be aware of how the trends in these industries evolve. So there are certain bites from that which I'll definitely bring out, along with some of my own opinions from what I've been seeing in my own research.

And like I said in my previous point, this is a very complicated ecosystem of businesses, where enterprises are dealing with multiple apps and multiple technologies. Some are on the old stack, some are trying to modernize, some are moving from this to that, and so on, in this entire tug of war about what the right technology set should be.

We forget that there is a business element in this entire ecosystem. Who are the ones driving this thing, whether it's revenue or sales? And actually practitioners too. The demand we are seeing these days is: can Avo support business-analyst-type personas? Can they automate?

These are some of the new trends we've been hearing. When I bring in that element, a manual tester able to automate, or a product manager or business analyst able to come to Avo and automate, that means that however complicated our background algorithm is, we have to simplify it from a front-end perspective.

Otherwise, you're still calling yourself a no-code or low-code platform, but your approach to automation is still messy or complicated; it requires a skill set, right? And then what is the point of evolving from the Selenium era to the no-code era if understanding how to automate takes the same amount of time, if the learning curve for the user is the same, right?

So the point I'm stressing is that we have to keep that end-user or business-user hat on at all times when thinking about how you project anything on the UI, right?

With that, the first point I'm trying to make is about the way test cases are authored. Multiple companies are doing it in multiple formats, and they're quite innovative in their own sense. But I do see a trend: NLP-based authoring is one area that will pick up more.

And the reason I say that is because it involves zero skills, or, when I say zero, near-zero skills. Secondly, it doesn't require any programming language, of course; the way you write an essay in English is how you would basically write your automation.

So I think that is definitely going to pick up. And I see that with an agile team where everybody wants to contribute to QA; it's not just one person's job, right? So, for example, if your designers are evaluating certain things, or your manual test cases are written manually in whatever format, you can still tell your business users: go ahead, draw on this canvas, and see whether you can automate a simple thing in English. Why not?
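As a toy example of what NLP-based authoring means in practice, here is a deliberately simplified sketch; real platforms in this space use far richer language models than this handful of regexes.

```python
# Toy sketch: map plain-English test steps to structured automation actions.
# Real NLP-based authoring uses language models, not three regexes.
import re

PATTERNS = [
    (re.compile(r'open (?P<url>https?://\S+)', re.I), "navigate"),
    (re.compile(r'click (?:the )?(?P<target>[\w ]+?) button', re.I), "click"),
    (re.compile(r'type "(?P<text>[^"]+)" into (?:the )?(?P<target>[\w ]+)', re.I), "type"),
]

def parse_step(step: str) -> dict:
    """Turn one English sentence into a structured automation action."""
    for pattern, action in PATTERNS:
        m = pattern.search(step)
        if m:
            return {"action": action, **m.groupdict()}
    return {"action": "unknown", "raw": step}

script = [
    "Open https://example.com/login",
    'Type "qa_user" into the username field',
    "Click the Sign In button",
]
for step in script:
    print(parse_step(step))
```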

Similarly, the other aspect, and at this level the contribution I'm talking about is a little more technical, is self-learning testing systems. The system has to automatically understand and detect where failures have happened, analyze the results as and when they are churned out, and be intelligent enough to at least pinpoint the root cause of those failures.

A lot of this is happening manually today, or you are exporting the data to a BI tool and trying to make sense of it. But if you can provide that within the test automation tool itself, then the interpretation of the data can be looked at from a project manager's perspective, by your business owners, and many people can contribute that way.
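One simple way to picture that self-learning aspect is automatic grouping of failures by error signature, so a human sees a few root-cause buckets instead of dozens of raw messages. This is an illustrative sketch with made-up log lines, not any vendor's actual analysis engine.

```python
# Illustrative failure triage: cluster raw failures by error signature so
# the likely root causes surface automatically. Log lines are made up.
import re
from collections import defaultdict

failures = [
    "test_login FAILED: TimeoutError: element #submit not found after 30s",
    "test_cart FAILED: TimeoutError: element #checkout not found after 30s",
    "test_api FAILED: HTTP 500 from /orders endpoint",
    "test_profile FAILED: HTTP 500 from /orders endpoint",
    "test_search FAILED: AssertionError: expected 10 results, got 9",
]

def signature(message: str) -> str:
    """Normalize a failure message into a comparable signature."""
    msg = message.split("FAILED: ", 1)[1]
    msg = re.sub(r"#\w+", "#<id>", msg)   # collapse element ids
    msg = re.sub(r"\d+", "<n>", msg)      # collapse timings and counts
    return msg

clusters: dict[str, list[str]] = defaultdict(list)
for line in failures:
    clusters[signature(line)].append(line.split()[0])

for sig, tests in clusters.items():
    print(f"{len(tests)} test(s) share signature '{sig}': {tests}")
```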

So that part is very important, and it extends, of course, into areas I won't touch on because they're already common topics, but it goes without saying that you need to think about those areas too, where AI will be further strengthened and evolved. Right?

The second-to-last point I'd like to talk about is predictive analytics. I spoke about that a little, but let me give you a little more background. When I say predictive analytics, it's not just telling or pointing teams or enterprises in the right direction about what has happened; how do you evolve to that?

Because you are actually handing a statement to your business owner or somebody: go and find the fault there, right? Now that is quite a bold statement to make unless you're very confident. It's like manual work: if somebody asked me, are you very confident that there will be a guard at the main gate of my office?

I'm not sure; he might be out for a tea break or gone for a smoke, I don't know, right? So the point is: how do you mature that model? It needs to feed on a lot of data. For example, your support data, right? How many tickets have been raised in the last six months?

It also takes in your change requests. That means the requirement that went through was probably not accepted the way it was; it underwent a lot of cycles, and today, X days after the feature's launch, it has changed into something else. Then again, your observability-related information, right? You monitor your application.

Where did the DB spike happen? CPU usage, all of that. Imagine, there is so much rich data; it's all a gold mine. And when you club it together, marrying all these data points into a data lake, you can then tell them: we have found that in the last six months, when this happened, this is where your utilization went.

This is where your API layer basically cracked; hence you have a hundred-odd bugs in the middleware and not on the front end. That is what they would be very interested to know, whereas currently most players only talk about test cases passing or failing, how many test suites were okay to approve or not. I think this is where the focus will be going, of course.
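As a cartoon of what such predictive analytics could look like once historical run data, tickets, and observability signals land in one place, here is a sketch with scikit-learn on fabricated features; a production model would be far richer.

```python
# Cartoon of predictive failure analytics on historical release metadata.
# Features and data are fabricated; assumes scikit-learn is installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-module history: [lines_changed, tickets_6mo, change_requests, db_spikes]
X = np.array([
    [450, 12, 5, 3],   # middleware / API layer
    [ 30,  1, 0, 0],   # front end
    [120,  4, 2, 1],   # auth service
    [600, 15, 7, 4],   # orders API
    [ 10,  0, 0, 0],   # static pages
])
y = np.array([1, 0, 0, 1, 0])  # 1 = release-blocking bugs appeared here

model = LogisticRegression().fit(X, y)

# New release: which area deserves the testing spotlight?
candidates = {
    "middleware": [500, 10, 6, 3],
    "front end":  [ 40,  2, 1, 0],
}
for name, feats in candidates.items():
    risk = model.predict_proba([feats])[0, 1]
    print(f"{name}: predicted failure risk {risk:.0%}")
```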

And the reason I'm sharing this is that it's not a secret anyway; many people are talking about it. And the way I'd really like to conclude this point is: earlier, it was a lot of manual work. Then we started talking about automation, right? Probably keeping humans in the loop and all of that.

But eventually, we don't know, five or ten years down the line, a lot of this will be autonomous too, right? You are just informed, maybe kept in the loop, but you don't have to act on it.

So that's how I see the trend changing, from manual to automation to autonomous, and the majority of the backend of that entire pillar will be how you mature your AI models.

Kavya (Director of Product Marketing, LambdaTest) - I was absolutely captivated by all the various points that you had to share. Very insightful perspectives, Manish. Interestingly, I also had a discussion recently with Nithin. We spoke about user experience testing, wherein he spoke about how different teams are involved in creating the user experience framework itself.

So your points about how different team members, be it a designer or a business analyst, for instance, all have to test it out at the end of the day; when you were talking about it, that's exactly what it made me recall. It's about how you can make testing more accessible for various teams.

And the second thing that stood out for me was, of course, the predictive analytics part: forecasting future outcomes that are going to help not just users but, from a scale perspective, even the enterprises. So yes, absolutely.

Manish Jha (Senior Director - Product Head, Avo Automation) - Because everybody is trying to save money at the end of the day, right? So if your tool is not helping them save money, they are already paying a licensing cost to you, and if we're not helping them save in any other dimension, then it just adds to their cost at the end of the day. Then what is the ROI they should expect from our side? What is the TCO? So yeah.

Kavya (Director of Product Marketing, LambdaTest) - Yeah, absolutely. As you said, the future of AI-powered testing definitely sounds promising. Scalability, reliability, and of course the cost-effectiveness of it all; these, I think, would be the three pillars we need to ensure we are providing to customers at the end of the day.

And coming to the very last question of the day, what advice would you give to software development teams that are aiming to leverage AI to enhance their test automation strategies?

Manish Jha (Senior Director - Product Head, Avo Automation) - Sure, sure. If I'm qualified to give advice, I'll definitely give it. Again, these are some self-learnings, Kavya, from seeing how we have matured and which precautions and focus areas have yielded results, right?

So I'm going to talk more about the approach side of things to sum up the advice, so that people can make sense of it. So, how did we start off? When AI and Gen AI became buzzwords, customers were asking: do you have anything to show in your product?

So yes, we jumped onto the AI and Gen AI bandwagon because of the pressure from the market and from customers that we had to show something. But that aside, of course, we had strategic planning around our roadmap and other areas, about how we should do things more smartly. That was the idea.

Now, whether AI comes into the picture for certain things, or which aspect of AI, machine learning, etc., that's a technical discussion to have. But we definitely had a very clear goal in mind. Number one, I would say: you should have a very clear goal in mind, because there's a specific thing an organization wants to achieve, right?

An organization has a lot of issues, right? You can't solve all of them. So you have to think about which is the most problematic area you would pick. If you start your journey keeping that goal in mind, I think it will help you filter out all the noise you'll keep hearing along the way as you develop and go about that idea. That's number one.

Second, I would say: on that journey, find the right spots. You know, when we go on a road trip, for example, there are multiple routes you can take, but you choose the most optimized, shortest one, where you probably have, you know, the most dhabas and so on; personal choices, of course, but it crafts a tailored road for you, and that is what you will choose.

Similarly, on any journey from an enterprise perspective, if one of your spots is that you're spending a lot on manual work, doing a lot of repetitive work, pick that out against your goal. So, for example, from the first point, when I said you have to have a clear goal, I might pick: I want my risk coverage at a hundred percent after six months.

Then from that, applying my second point about finding the right spots, I would say: okay, I will target all that manual work in order to achieve it. Third, these are quite overwhelming technologies, so I think having a very deep understanding is also very important.

You can't get away with having watched a YouTube video or attended some webinars or sessions and then think you're an expert. A deep understanding of technologies like machine learning, natural language processing, and computer vision is very important.

I mean, whatever is relevant to the goal you're trying to solve, you identify those technologies; and I'm talking about applied knowledge here, more so than the other way around. So that is one area, I would say.

Only then can you champion it, because unless you put your hands in the mud, you don't know what's inside. One more point I can probably add is that you should be experimenting and iterating a lot. That's standard practice for any POC, or for any new product development you do.

But it also fits this context, because, like I said, your product is changing, your data is changing, your environment is changing; how you started your AI journey or technology adoption on day one of the month versus day 30, things are changing rapidly overnight.

So iterating and feeding back into the system, which goes back to my earlier points, is very important, so that you stay relevant to what you started the project with, and in the middle of it you're still relevant.

So I think that's important. And again, like I said, monitoring and adapting is another area that should definitely be looked at. What are your key metrics? Are they really making sense? Say I'm doing all the checklist things, everything I've said, but my manual work is still, let's say, 20 hours, which was the initial stat.

That means something is not right to me. So quantifying things, having those metrics, is very important; whatever spot you were trying to solve for, the redundancy or repetition of the work, it has to be translated into a number.

And if, when you adopt these technologies or methodologies and put them into the process, it is not giving you drastic results, that means something is not right. So that monitoring is very important, because otherwise you might get very adventurous and keep on experimenting.

You're burning the company's dollars, time, and energy, everybody's, and actually getting nothing out of it. So I think that should be the fence within which you play when you are embarking on your journey in these areas. Yeah.

Kavya (Director of Product Marketing, LambdaTest) - Absolutely well put. Definitely great advice; thank you so much for sharing that, Manish. What stands out for me is that we should keep digging deeper and look at the foundational layers, right, when it comes to AI transforming test automation strategies at the end of the day. Thanks once again for these insights.

And with that, we come to the end of today's show. Manish, I would like to thank you for being a part of our XP Series, where we bring the latest trends, innovations, and technologies, and conversations around them. Thank you so much for sharing your journey.

Thank you so much for the insights on all the fantastic work that Avo is doing. And as I said earlier, we're very excited about the joint partnership between LambdaTest and Avo Automation.

And for those of you who are listening, please subscribe to the LambdaTest YouTube channel for more XP episodes. Thanks, Manish, once again; it's been a pleasure hosting you.

Manish Jha (Senior Director - Product Head, Avo Automation) - Thank you, Kavya. Thank you for inviting me. It was an absolute honor to talk to you, and this conversation was quite exciting. Thank you so much.

Kavya (Director of Product Marketing, LambdaTest) - Yeah, absolutely. Have a great day, everyone. Thanks once again, Manish.
