XP Series Webinar

Future Trends and Innovations in Gen AI for Quality Engineering

In this XP webinar, you'll explore future trends and innovations in Gen AI for Quality Engineering, discovering how advanced technologies are reshaping QA practices. Gain valuable insights to optimize your approach and stay at the forefront of software quality assurance.

Rituparna Ghosh

Vice President & Head of Quality Engineering & Testing, Wipro

Rituparna (Ritu) Ghosh, a seasoned Agilist and Servant Leader with over two decades of experience at Wipro, spearheads the company's Quality Engineering and Testing Practice as a Vice President. Renowned for her foresight in adopting emerging trends and technologies, Ritu fosters high-performing teams and promotes a culture of collaboration and diversity. She is dedicated to mentoring budding leaders and sharing her extensive knowledge across the organization.

Maneesh Sharma

Chief Operating Officer, LambdaTest

Maneesh Sharma, LambdaTest's Chief Operating Officer, boasts over 24 years of industry expertise, driving GTM growth and operations. Formerly, as GitHub India's Country Head and General Manager, he propelled it to become the world's fastest-growing developer market. Maneesh's leadership spans renowned firms like Adobe, SAP, and Sun Microsystems, where his role as Java Ambassador fostered developer engagement across APAC. Passionate about technology, he remains dedicated to software innovation.

The full transcript

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Hello, everybody! Welcome to another exciting episode of the LambdaTest XP Series. This is part of our community initiative at LambdaTest, where we are bringing you the latest innovations and best practices in the field of quality engineering and software development.

I'm Maneesh Sharma, and I’m Chief Operating Officer here at LambdaTest. I’m privileged to be hosting industry veteran Rituparna Ghosh, who is the Vice President and Global Head of Quality Engineering and Testing at Wipro.

We'll talk about a lot of testing best practices today, but the topic is going to be really interesting. Everybody's talking about it all over the world. Even if you're not in the tech domain, you are talking about Generative AI.

So today, we'll be talking about the impact of Generative AI in the quality engineering and testing world. Let me quickly introduce our esteemed guest.

Ritu brings over 20 years of experience in shaping quality engineering practices. Her leadership at Wipro has led to significant advancement in software testing methodologies, fostering a culture of innovation and collaboration.

Ritu's expertise extends beyond the technical domain. She's known for her commitment to mentorship and fostering diverse, inclusive workplaces. She has been helping customers and teams all over the world embrace quality engineering and testing practices.

I couldn't have asked for a better guest on today's topic than Ritu. Thank you, Ritu, for taking the time and joining us here today.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Thank you for the invitation, Maneesh. I'm really looking forward to an exciting conversation, given all the buzz that Generative AI has been generating, if I can use that term, across the globe for the past couple of quarters.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Yeah, absolutely. As I was saying, it's not only tech people, even my parents talk about, you know, what is this Generative AI? You know, kids are using Generative AI to submit homework.

Funny thing, you know, my daughter submitted her assignment, which had nothing to do with Generative AI. She didn't even use ChatGPT, but then the Turnitin software flagged a lot of the content that she wrote as Generative AI.

So I think, you know, the discussion today will bring out how these models are tested and what's been going on.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Definitely. It's a brave new world out there.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - It's exciting. I think there are these waves of innovation that come. There was this point in time when the dot com or the web wave came. Then there was the whole mobile application wave that came.

I think we are seeing a very significant shift in the whole tech industry and how we, as humans, are going to engage with software. It's definitely exciting. I would love to hear your thoughts on the questions that I have related to this topic.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Sure!

Maneesh Sharma (Chief Operating Officer, LambdaTest) - So, I have a couple of questions. I'll keep referring to my notes. I haven't memorized everything because I want to make sure I ask all of these interesting questions that we've got from a lot of our customers and audiences as well.

So jumping straight into it, Ritu. With the integration of Gen AI, how do you see the quality engineering landscape evolving? And what significant changes or developments do you see coming up in the QA field over the next few years?

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - I think for us in quality engineering, you know, we couldn't have asked for a better time to be in this industry. If you look at the last couple of years, testing specifically has undergone a sea change, because initially it was all about, you know, manual testing, and then automation started coming in.

Then we had a digital transformation, the entire digital wave when you were looking at, you know, Agile, DevOps ways of working. How can you get feedback faster?

So we said that, you know, you've got to shift testing left. You've got to bring automation into testing. You've got to have predictive automation, and how can you bring AI into testing, and then Gen AI comes in.

So for us, you know, over the past couple of years, we've literally got veterans in our team who've seen it from the days of huge manual testing teams to teams who are now doing cutting-edge stuff.

So we couldn't have asked for a better time to be in this field and in this industry. I think generative AI specifically is going to have a huge impact on testing.

And I think testing is one of the low-hanging fruits here, mainly because in the entire SDLC, testing has always been the first area that has been impacted or where innovation has happened.

So I think that trend is definitely going to continue. There is obviously a lot of, I don't want to use the term hype, but there are a lot of exciting things that are happening every day, if you're on Twitter and you follow some of the trends or threads.

At least in my Twitter feed, my top tweets every day are around, you know, what's new in generative AI, what Midjourney has dropped recently, what is happening with ChatGPT, etc.

With all of these things, testing becomes the first as well as the last line of defense. Are the things that you are creating using generative AI models fit for purpose? That's where testing becomes important. If you are creating applications on generative AI, are those the right things?

Testing the LLMs themselves, again, is something that becomes very important. So those are some of the changes that I predict are going to happen. For testers, how do you really look at this new technology, this new beast? How do you, I wouldn't say get it under control, but how do you equip yourself to use things like prompt engineering in a more effective way?

Those are all the things that are going to happen as far as quality engineering is concerned. Obviously, there is this entire expectation that productivity is going to just zoom through the roof, that you will not need humans anymore, that everything is going to be autonomous. No desk will have an individual at it; you'll just have machines and bots scurrying around the entire place.

So that's the height of expectations. The next couple of quarters, I think, are going to be extremely exciting to see what really happens, because if you look at the typical Gartner hype cycle, I think the trough of disillusionment is going to come a little sooner rather than later as compared to other technologies.

But then, once some of the glitches get sorted out, that's when the real fun will begin in terms of how do you really master this to ensure that you're able to get the maximum benefit out of it?

Use it along with the humans that you have, and really make sure that you have an output that is fit for purpose, output that allows you to save time and become productive in ways that you possibly hadn't even thought about earlier. That's a little bit of the way I think things are going to move.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Yeah, I think you're right. It's not going to be an overnight change and there will be that trough of disillusionment because everybody is expecting the world to change.

Applications will write themselves, tests will run on their own. That's really not what's going to happen. I still remember, and I won't disclose how old I am in this industry, but there was a point in time when NetBeans was launched, and you could actually drag and drop applets. Right?

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Yes.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - It'll generate code, and with the click of a button, it will create an EJB deployed into an application server. And then you can see a hello world on a webpage. And it was like, Hey, you know, then you don't need to write applications. Everything is going to be drag and drop, but we know how that unfolded.

So I think similar things will happen today. I think you touched upon a very important point, which was testing these LLM models because that's the base for all of these generative AI solutions.

And I remember, I think, a couple of weeks back, it was all over Twitter, ChatGPT overnight just started, you know, spitting out gibberish, right? Nobody could understand if it was English, Spanish, or whatever that was.

And that got me thinking, hey, who's testing these things? So what's your view on creating these testing models for generative AI itself?

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - I think that's going to be extremely important. If you look at what happened with Gemini, right? I was reading a blog, and it was very interesting. I came to my team and told them that the blog was actually saying, you know, couldn't they have invested in a couple of testers who could actually test the output?

So I think what will happen with a lot of the LLMs is that, forget the hallucinations, which we know are going to be there, you will have to ensure that the way you're training these models is, again, and I'm going to be using this term quite often, I guess, fit for purpose.

But I think what we don't realize is that if you're not very clear about the strategy you use to train these LLMs, you can inject a lot of bias, which will then skew the output that you get. And this is an old story, not really generative AI-related, it's more of an AI story, and I'm not taking the name of the organization.

So they were doing this entire recruitment drive, and they used their past data to train their model. And because they had, till that time, only got profiles from men, and that too white men, their model got trained to actually reject a lot of profiles that did not fit that criteria.

So we start with the right intent, but if we are not watching very closely over what's happening, the outcome that we get can be completely different from what we started with.

And I was discussing this with my team earlier this week, especially if you look at STEM, right? The kind of disparity that exists in terms of gender diversity, it's not a secret that women in STEM are not a common sight as yet.

Of course, it's a lot more common than it was when, I guess, you and I started our careers, but there is a racial divide, there is an ethnicity divide, and when you train your model, all of those biases will get in. You can actually end up with a garbage-in, garbage-out model.

So how do you ensure that, A, the right people are training your models, and B, somebody is looking at the output and saying, hey, you know what, this makes sense or doesn't make sense?

Again, for us as quality engineers, it's great, because I think we will always have a lot of things to do. And the way I look at it is, you know, assurance is a term which is very closely aligned with quality engineering.

We are the first and last line of defense for the customer, to ensure that they don't get a call in the middle of the night saying, hey, you know what, P1, your product has crashed, or it's like ChatGPT throwing up all kinds of gibberish.

So testers are definitely going to become very, very important. We will have to figure out ways and means of evaluating the output which comes in. How are we going to say, hey, this is right, this is wrong? And that's where this entire idea that you will not need testers anymore breaks down.

On the contrary, you will need testers who are a lot more evolved, a lot more mature, a lot more groomed in the ways of this specific domain, which is, you know, quality engineering, so they are actually able to look at the output and say, yes, this is right, or this is wrong. So I think that's what we need to work towards.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - I think there will be a new discipline of testing called bias testing which might come out. How do you test for these biases across different applications? Anything can happen.
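A minimal sketch of what such a bias check could look like in practice. The helper names (`selection_rates`, `passes_four_fifths`) and the toy data are illustrative assumptions; the technique itself is a standard disparate-impact check, comparing a screening model's selection rates across demographic groups in the spirit of the "four-fifths rule" used in hiring audits:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool). Returns selection rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(decisions, threshold=0.8):
    """Flag disparate impact: every group's selection rate must be at
    least `threshold` times the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return all(r >= threshold * best for r in rates.values())

# Toy audit data: (group label, whether the model selected the profile)
audit = [("A", True)] * 8 + [("A", False)] * 2 \
      + [("B", True)] * 3 + [("B", False)] * 7
print(passes_four_fifths(audit))  # prints False: B's 0.3 rate is below 0.8 * A's 0.8
```

A check like this would run as one more suite in the pipeline, alongside functional tests, every time the model or its training data changes.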

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Very true, very true. And, you know, we keep on talking about prompt engineering. You will have to do assurance around that too, because what's to stop your developers, or even your new testers, from creating prompts which, you know, end up consuming so many tokens?

So whatever you thought you were saving from a cost perspective actually becomes something you had not even imagined. So beyond testing as we know it today, there are going to be new roles, more evolved roles, which will come in.
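That cost-assurance idea can be enforced as a cheap guardrail in a test suite. A rough sketch, where the whitespace token estimate is a deliberately naive heuristic (real providers ship exact tokenizers) and the function names and budget are illustrative assumptions:

```python
def estimate_tokens(prompt: str) -> int:
    """Very rough proxy: ~1 token per whitespace-separated word.
    Real LLM APIs expose exact tokenizers; this is only a cheap guard."""
    return len(prompt.split())

def check_prompt_budget(prompts, budget_per_prompt=500):
    """Return the prompts that blow past the per-call token budget,
    so a CI check can fail before the bill does."""
    return [p for p in prompts if estimate_tokens(p) > budget_per_prompt]

prompts = [
    "Summarise the release notes for build 42.",
    "word " * 600,  # a runaway prompt pasted in by mistake
]
over = check_prompt_budget(prompts)
print(len(over))  # prints 1: only the runaway prompt is flagged
```

In a real pipeline you would swap the heuristic for the provider's tokenizer and fail the build when `check_prompt_budget` returns anything.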

And as an industry, what we will have to figure out is how we create those kinds of people. How do we create those mature individuals who are able to, from day one, look at the output? Because the output which is coming out, the model is training itself on industry data, right?

So I have to have somebody who is mature enough to look at that output and say, hey, this makes sense or doesn't make sense. It's almost as if, you know, when I was in school, I first learned my ABCs, then I learned how to construct words, then how to construct sentences, and then how to create an essay.

And, you know, if I had stuck to writing, maybe I would have written a novel. But now, I'm going to be in a scenario tomorrow where I don't have that initial grounding. Somebody throws a novel at me and says, hey, you tell us, is this good quality or bad quality?

So how do I get the initial training which I had in school, in the absence of that school? How do I get to that level? How do we train the tester of tomorrow? Because we are all saying no more manual testing, but manual testing, in its own way, was very useful, because it gave people that scope to learn, right?

Even if you say that, you know, you'll do more exploratory testing or A/B testing, that's going to be manual, and everything else is going to be automated. But still, when you write automation frameworks, teams are able to get themselves trained on that. But if I remove all of that and say your tool is going to give you everything, you just say, is it fit for purpose or is it not?

Is it right? Is it not? Does it cover all the scenarios that it should have, or has it not? If you use this output, do you have 100% coverage, or do you not? And I, who don't even know my ABCs, am now supposed to make those decisions. Interesting. So that's, I think, the challenge that we'll have.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Yeah, but you're right. I think the fundamentals won't change. Understanding and upskilling on top of technology will be the most important thing for testers.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Yes, definitely.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - I think if I go back a little bit, before Gen AI itself, we've always studied in software engineering that unit testing is the most basic and most important thing that everybody has to do. And we know how many developers do unit testing, right? I know, nobody writes unit tests.

So, you know, I was lucky to get a peek into Copilot and Gen AI a couple of years back. One of the most interesting things that I saw, Ritu, was how Gen AI models can help write unit test cases.
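The mechanics of that are straightforward to sketch. The helper below only assembles the instruction an LLM would receive; the model call itself (Copilot, an OpenAI endpoint, a local model) is deliberately elided, and the function name and prompt wording are illustrative assumptions:

```python
def build_unit_test_prompt(source_code: str, framework: str = "pytest") -> str:
    """Assemble the instruction an LLM would receive to draft unit tests.
    The actual model call is out of scope here; this only shows the
    shape of the request that tools like Copilot construct internally."""
    return (
        f"Write {framework} unit tests for the function below. "
        "Cover the happy path, edge cases, and invalid input.\n\n"
        f"```python\n{source_code}\n```"
    )

snippet = "def add(a, b):\n    return a + b"
prompt = build_unit_test_prompt(snippet)
# `prompt` would then be sent to the model; the tests that come back
# still need a human (or a second check) to confirm they are fit for purpose.
print("pytest" in prompt)  # prints True
```

The useful part of the pattern is the last comment: generation is cheap, but review of the generated tests is where the quality engineer stays in the loop.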

So I think this whole Copilot model is really good, because it will boost productivity, as you said, so that you can write more code and the unit test cases get generated. But I think there's also the other side. You talked a lot about manual testing.

When I'm talking to customers, I ask them, hey, why are you still doing manual testing? And one of the answers that I used to get was, hey, because we've not been able to find a lot of talent who could come and write automation test cases. There was always this gap from a scaling perspective.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Yes, right! Absolutely.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - And I do see Generative AI starting to help, at least from a baseline perspective helping you get that starting point, that template of writing these test cases. Do you see that happening a lot?

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Yes, definitely. Let's look at both the developer and the tester. One of the biggest things that I think developers or engineering teams today don't take enough advantage of is the entire thing around test-driven development. I mean, it's such a fantastic philosophy.

People don't do TDD, and when we talk to clients, one of them told me, you know, if my developers just write unit tests, I'll be very happy. So yes, I think that is definitely going to happen, where the developers are going to be able to do that. And then there's the entire skills issue, which we had because, you know, you don't have enough people.

So here you have this entire system which is creating a lot of these automated tests, and it's like a marriage made in heaven. In fact, that's one of the things that we talk about: why do you spend time doing that? Here is something which is available. It looks at a user story and just throws up the test scripts, etc.

But as I said, you will need to ensure that you have a human in this entire loop who is checking whether the output is right or wrong. So even if I assume that garbage hasn't gone in, you will have some garbage coming out along with the good things.

So separating the wheat from the chaff, so to speak, has to happen, and humans will have to get in there. So the entire problem clients face, that, you know, I don't have enough people doing the test automation, that will definitely get addressed.

Today, I have senior folks who can look at the output. But tomorrow is what I, as a leader, am more concerned about, because tomorrow the problem is going to be, I don't have enough people who can really look at the output from a Gen AI solution and say whether it is right or wrong.

And then if you have a situation where somebody looked at the output in the wrong way and it created a huge mess, well, in our industry today, with the way the internet and Twitter have pervaded everything, bad news spreads so fast. You know, with Gemini, within a couple of hours everybody knew what was happening.

And if that happens, and it is going to happen, we run the risk of the entire thing going the other way: forget about it, let's focus on what we knew, tried and tested.

But even if that doesn't happen, there's the question of the next four or five years. If you look at the talent model which exists in the industry, and I'm going to be talking a lot about the outsourcing industry specifically, the model is that we take the talent from the colleges and train that talent over a period of time.

And in the next 2-3 years, they really become mature individuals who end up adding a significant amount of value to the entire process, because, obviously, in their first year out of college, they're still going to be in the learning phase.

But if, in those 2-3 years, you get into this entire mindset of, I don't need a tester because my solution is giving me all of that output, where do you have the talent flowing in? How do you address that?

And this is going to have, I think, a multiplied impact on the global IT industry as well, because globally, there is a lot of talent which comes from India. Plus, in their own colleges and universities, when people come from college into work, they will face the same challenge as well.

But I think that's something which all of us as industry leaders seriously need to think about: how are we going to address this? And not only in testing; look at development too, right? A good developer tomorrow becomes a great architect, right? Because you do not become an architect who can design, let's say, a banking platform on day one.

You learn, and you do things on your own. You see how things work, and then you enhance yourself. How are you going to get those great architects of tomorrow if you are saying, you know, my solution is going to do it, my tool is going to do everything, and I don't need anybody? It's an interesting conundrum.

In fact, you know, this is one of the things that I think organizations need to think through as well. And by organizations, I mean the business organizations or the customers that we cater to. There is, and that's something we see today, whether it be a developer or a tester, a huge amount of cognitive load on these folks.

There's intrinsic load in terms of what's happening in my own domain, and there's a lot of extraneous load: new tools coming every day, something called DevOps, I have to do everything, I've been told you run it, you fix it, you do all kinds of things.

You know, I'd need 10 hands to do all of it. And then I'm really not able to understand the business. So tomorrow, with Gen AI, I think a lot of the cognitive load will go down, because you'll have systems which will do a lot of things for you.

So it will give them, whether it be a developer or a tester, a lot of time to really understand what is happening in the business process, and then give their inputs and insights on how the business process can be improved further.

How do you bring in more checks and balances such that, you know, what you finally ship to the market is absolutely great? It will give them a lot more time to act on the kind of signals you get when we say build fast, fail fast, learn from the failure. A lot of them fail fast, and I've seen people saying, yeah, I know it's tech debt, but I need to hit the market, so that's okay, I'll live with it for the time being.

So you will now have the ability and the time to really focus on some of those things and reduce the tech debt that you have, which will have a multi-fold impact: if you're reducing tech debt upfront, or not building it up at all, you will build a much more sustainable product.

So resilience, sustainability, green, whatever you want to call it, a lot of those things are going to improve as well. So I think those are some of the things organizations will have to think through, because it's not only about doing things cheaper or doing things faster.

I think the bigger thing will be: I am going to build a product which is right the first time, because I'm actually able to fail fast, get those inputs, and make those changes. So yes, I can possibly do it faster, not a little bit, a significant amount faster, but it may or may not be cheaper from that perspective.

But can I improve on my time to market? Can I ship into production earlier than what I was doing yesterday? Definitely. That's definitely going to happen.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Those changes are happening, and I think you're in a very unique vantage position, talking to customers across industries and geographies. I would love to hear some real-life examples from the quality world of what customers are starting to experiment with.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - A lot of this we are seeing in spaces where a lot of speed is expected, whether it be retail, and definitely insurance and banking, where the market itself is changing so fast. So we are seeing a lot of our large customers starting to experiment with this.

Very interestingly, we are seeing a lot of our healthcare customers experiment with this, whether it be having their own version of, you know, OpenAI or ChatGPT, etc., or building and training their own LLM.

So there are different kinds of experiments which customers are doing in the testing space. We are now working with a whole host of customers globally, in Australia, in Europe, in the U.S. And the biggest thing we are able to demonstrate to them, whether it be around a web application or even around COTS products, is that there are multiple use cases that we look at.

You know, of course, test data management and test data generation are some of the most common use cases. The second use case, which is my personal favorite, is, you know, if you look at your user stories, how can you really have your test cases designed from them, and then use those to test against your user stories and see whether things are making sense or not?
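The scaffolding around that flow can be sketched as follows. The model call that drafts the cases from the user story is elided, and the class and field names are illustrative assumptions; the point is the duplicate triage and human-review queue that the generated output still needs:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GeneratedTestCase:
    title: str
    steps: List[str]
    reviewed: bool = False  # a human still signs off before execution

def triage(cases):
    """Drop obvious duplicates by normalized title; queue the rest for review."""
    seen, queue = set(), []
    for case in cases:
        key = case.title.strip().lower()
        if key not in seen:
            seen.add(key)
            queue.append(case)
    return queue

# Toy output as a model might return it for one login user story
raw = [
    GeneratedTestCase("Login with valid credentials", ["open page", "submit"]),
    GeneratedTestCase("login with valid credentials ", ["open page", "submit"]),
    GeneratedTestCase("Login with expired password", ["open page", "submit"]),
]
print(len(triage(raw)))  # prints 2: the near-duplicate title is dropped
```

A real pipeline would add coverage mapping back to acceptance criteria, but even this minimal triage keeps a reviewer from wading through raw model output.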

So I'll give you one example. This is a client on the West Coast that we were working with. One of the most common things that clients say is, show us that it works. So we took this particular user story. It was a COTS product.

I think it was for an Oracle implementation which was happening, and it was pure application testing, so we were testing the application. We took that particular user story and generated around 150 test cases, which took about an hour, an hour and a half, to do. Then we went back to the customer and said, before we tell you what happened, how long did it take your tester to actually do it?

So the customer said, you know, for this particular user story, because it's a fairly complex one, we were working on it for quite some time. If I look at the effort spent, it was about 10 person-days of effort that the tester had to put in, because he had to talk to various business users, understand the business process, interact with different people, etc.

And then we said, hey, you know what? We took one and a half hours and got the same output that you did. So those are the things which are exciting customers a lot. Of course, the things customers are bothered about are, you know, what happens to security?

Who will do the training? With testing specifically, given the amount of data that exists, our philosophy is that you really don't need to train an LLM for your testing needs; you can look at whatever LLMs are available in the market and build on top of those.

But yeah, security, integration, those are the kinds of challenges that customers are thinking through. Still, everybody is very excited about how to integrate this and how to really look at it, whether from a development perspective or a testing perspective.

Customers say, tell us about the use cases you are working on, and collaborate with us. So there is one set of customers we work with, big customers in their own right, who have been really adopting the modern or new ways of working. With those customers we end up collaborating, because a lot of them have very senior leaders who are obviously also trying out new things.

So that becomes a collaboration model. There is another set of customers who say, yes, this is great; if you have something already done, show us it works and come and do it for us. So I think you can either build the entire car for them, or they can rent a car from us, or they can say, we've got everything.

We will do it; we just need a couple of experts from your end, you know, to be part of our team. So there are different models, but one thing we are seeing is that everybody is extremely excited to exploit what this new technology can do for them.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - To be honest with you, I foresee in the next few quarters that overload turning into something else, because suddenly every application is Gen AI enabled, right? Your operating system is Gen AI enabled, your ERP is Gen AI enabled, your CRM is Gen AI enabled. There's so much Gen AI, it's gonna get super confusing as a user.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Yes, definitely.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - I don't know what's going to get tested here, by the way.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - So I think one good thing is that we will have a very happy next couple of years leveraging this. In fact, I know my team is working with LambdaTest as well on the solution for clients who are getting into entire migration or modernization programs. So for us, a lot of those clients are going through very large modernization initiatives of their own.

And I think today it's a lot less about only migration; it's migration and modernization happening together. There is a lot less pure lift and shift. And in those cases, the solutions that we are building along with partners like you are the things that are really going to be a differentiator for us. Customers appreciate a lot of what we are doing.

And yeah, the sky is the limit, at least from the business perspective. And I'm going to borrow your optimism, Maneesh, that things will not be as grim as some assume, and that new roles will get created. I think from a talent perspective, how people respond to that will become very important.

And I keep on telling this to my larger team as well: you have to be very, very focused on understanding the trends in the market and get into this continuous upskilling of yourself. I read a very interesting tweet today. There was this gentleman who had just been let go from one organization. I'm not going to take names.

And he said, you know, 20 years of devotion to this organization, and I've just been fired because I now have skills which are not relevant anymore. So if you look at talent today, I think that's one of the key things, and I keep on telling this to youngsters: ensure that you're constantly upskilling yourself and you're constantly employable.

Because I think we have moved out of that era where we were like, yeah, I've learned something new, and this is going to carry me for the next five years; five years later, I'll figure out something else. The pace of change is really mind-boggling, right?

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Totally agree. In the tech industry you definitely have to, and if you're not in the tech industry, you have to do it even more, because tech is going to disrupt whatever you're doing anyway. So it definitely is an interesting time. I think the fundamentals don't change, as you were saying.

You know, when I look at you and your team, you are the trusted advisors to customers, right? And the trusted advisor concept is because of the experience you have in working in this space. So there is no amount of Gen AI or autonomous testing that can solve for that experience. I think that's going to be critical for teams as they scale.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Yes, definitely. I think it's also about, you know, showing your battle scars. That's one thing which we have seen to be a real differentiator, because nobody wants to be a guinea pig.

So when we talk with our clients, we really have experts and SMEs who've gone through those failures, who've gone through those successes, who then go and tell the client: you know what, I've seen this; you're struggling with this problem, and these are the five times I've seen a similar problem.

And at times there might be a case where I've tried to solve it in five different ways. So I think that's what also makes the trusted advisor piece happen for us. And I think we've been very lucky: 25 years in the industry, and in the quality engineering space we have been almost a pioneer of sorts, doing a lot of first-in-the-industry things.

But what that does is make us even hungrier to ensure that we're constantly upping our game, and working with partners like you definitely ensures that we keep honing ourselves and sharpening our trade.

Because the kind of new thinking that you guys bring to the table obviously allows us to think differently, think out of the box, and ensure that the partnership is a win-win all around. So yeah, those are some of the interesting things that we are seeing now.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Yeah, optimistic times. As I said, customers are confused. We definitely have to go and help them. Everybody's confused. What's going to happen? But I think there was an interesting point one customer made. I was talking to them a couple of weeks back.

And I asked them, hey, are you using all of these code-generation tools for writing your tests and writing code? They said, yeah, we played around with it. It's good, but they shared a very unique problem, which sort of blew my mind. They said, you know, a lot of these models have been trained on 2020 data or 2021 data or 2022 data.

And the world we're in from a software engineering perspective, there is a new version or minor version of a language or a framework that comes out. So if I'm using, you know, language “X” version nine or two dot three, a model has been trained on version eight or seven dot four. Right?

So that model will always be playing catch-up with what is happening today. And I find that very interesting, sort of an indicator of what's going to happen in the future: how are these models going to keep catching up?
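That staleness gap can be sketched concretely. The snippet below is a minimal, hypothetical guard (the framework name and cutoff versions are illustrative assumptions, not real model metadata) that warns when an installed dependency is newer than the version a code-generation model is assumed to have seen during training:

```python
# Hypothetical check: warn when an installed dependency is newer than the
# version a code-generation model is assumed to have seen during training.
# The cutoff table below is illustrative, not real model metadata.

ASSUMED_MODEL_KNOWLEDGE = {
    "frameworkx": (7, 4),  # model assumed trained when framework "X" was at 7.4
}

def parse_version(text):
    """Turn a version string like '9.0' or '2.3.1' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def staleness_warnings(installed):
    """Compare installed versions against the model's assumed training cutoff."""
    warnings = []
    for name, version_text in installed.items():
        cutoff = ASSUMED_MODEL_KNOWLEDGE.get(name)
        if cutoff and parse_version(version_text) > cutoff:
            cutoff_text = ".".join(str(n) for n in cutoff)
            warnings.append(
                f"{name} {version_text} is newer than the model's assumed "
                f"training data ({cutoff_text}); review generated code "
                "against current docs."
            )
    return warnings

# Example: language "X" at version 9.0 while the model only saw 7.4.
print(staleness_warnings({"frameworkx": "9.0"}))
```

In practice such a table would have to be refreshed continuously, which is exactly the cost problem discussed here.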

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Right, right. I think that is very true. And it goes back to what I was saying earlier: how you are training the model, with all of the changes that are happening, is going to be very, very critical.

So think of a scenario, right? You're using a model that has been trained up to version 'X', and then it keeps on learning from what people are doing, and the people are doing the wrong things.

So the model keeps on learning the wrong things, not realizing that they are wrong. And the people who are using the output are using the wrong output. So it's almost a vicious cycle.

So I think the industry will also need to figure this out, because of what you just mentioned, right? It will become a fairly expensive proposition if, every six months or every three months, something is happening and I continuously need to keep on changing.

Hence, as you said, you will not get into a situation where the model or the bot or the tool takes care of everything. Everybody wants to believe that everything will be autonomous, but you will have to have the human at the center. Right?

And I think we will have to figure out how that human stays at the top of his or her game. So there's the cost, the quality of my output, and the speed at which I'm getting all of this. And then there's the value that I'm getting at the intersection of these three. The ideal thing is if the value is great, which means my return on investment is fantastic. That's the sweet spot.

But if any one of these is out of balance, the value that I'm going to get is obviously going to be a lot lower. And then I'll have to look and see what ROI I'm getting, and thus whether it makes sense to even go down this path in a very, very aggressive way.

I guess there will be some pioneers in the industry who will solve this as well. And yeah, we will always be around to validate whatever they turn out, which, again, goes back to what I said when I started.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - I think that was a very important point you made on, you know, TCO and ROI, because customers always look at that. There's a lot of technology available, forget about Gen AI, even with regular technology there's so much available, but sometimes it just doesn't make sense to adopt it because the returns don't justify it.

So I think these models will also evolve on how Gen AI will impact profitability or growth or all of these things.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Absolutely, absolutely. For example, we've done some analysis, and I'll talk about only the testing part.

Let's say you have a very small testing team, and your program is not going to be a long-running program. It doesn't make sense to say that I'm going to be investing in Gen AI; unless you've already done something at an organization level, from a return perspective it doesn't make sense.

Let's assume that you are in an industry which is not that fast-moving. You have a very fixed window of release, a fixed timeline; let's say every three months you're going to deliver something into production. So you don't need a release-on-demand kind of model.

And then I'm only going to look at these new ways to see: am I building for quality? Am I building resilience? Am I doing it right the first time? Then does it make sense to make a heavy investment into a new tech like Generative AI, and I'll call it a technology for the time being? I think those are the things that businesses will end up grappling with, because of the value versus cost versus ROI equation.

If you don't get it right, you can end up burning a lot of cash, and it might not give you the kind of returns that you're looking for. So it would not make business sense.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - That's the trough of disillusionment, as you rightfully said at the beginning of this conversation. I think we're almost at time, but I would leave one question for you to answer. There are a lot of changes going on. What should testers across disciplines look forward to? What should they be doing right now to embrace this change that is already happening?

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - I'm going to give maybe a little bit of a cookie-cutter answer to start with, and then I'll elaborate on it. Two things. There has to be openness towards whatever is happening. A lot of times, even today, I've seen testers who get into this mindset that my job is only testing; I'm not bothered about what is happening otherwise, right?

I've tested; anything else is a developer problem, somebody else's problem. But the tester and developer, even if the wall does not completely disappear, are going to be working in a very, very collaborative manner.

When you're doing that, you have to have a change in mindset that this entire thing is my problem. It is not only about the final testing; I'm not somebody who appears at the end.

So if I'm doing this entire left shifting, I need to be open to understanding from business, what is their thought process? Why are they doing what they are doing? I need to have the ability to raise my hand and say, hey, you know what? It does not make sense.

I need to have the ability to think: okay, functionally, this is what you're looking for, but non-functionally, what does it mean? What are the implications across scalability, usability, testability, performance, all of those things?

What does it mean? How many users are there going to be? Is this application going to be rolled out across multiple countries? Do I need to look at different languages? Those are things one would possibly end up doing.

But a lot of the time I've seen testers who just don't even think of asking questions around the non-functional requirements, because their thinking is: the PO will have thought of it, and if the PO has not put it in the story, why do I need to look at it?

So those are the changes in mindset which the tester of tomorrow has to make. They have to be able to raise their hand and say, this is my perspective, right? Look at security today. Testers don't look at security.

There's some security expert who is going to come and do something, and then the developers will figure out if they have to change their code. It shouldn't be that way; you are an extended arm of the business.

You have to think through that if the business wants to hit a particular date for their product, as far as time to market is concerned, and if you know that these things can be the difference between make and break, you have to tell all of that to your developer.

You have to tell that to your PO. I'll give you a simple example. We worked with this particular client who was developing a new POS application. Their old application was good, but they wanted to move to a new technology, etc.

It was almost a one-and-a-half-year development cycle following Agile, everything happening textbook, right? But they had not looked at any of the NFRs. They had done interim releases, and everything was working perfectly. Then, when they rolled it out across their multiple warehouses, within five minutes it crashed. And I was like, how is this possible?

How is it that nobody looked at the NFRs, the things that we consider so basic? How is it that the developer missed it? How is it that even a tester missed it? Those are the kinds of changes in mindset that one has to look at.

And then, along with all of this that I just talked about, the tester of today has to have what I like to call the growth mindset, which is: how do I constantly ensure that I'm not boxed into my current role, but am at least understanding, even if I'm not yet learning, what's next? Because, as you rightly said, Maneesh, the role will morph into something else.

If I'm not readying myself for it, when the role morphs, I will find that I only know what I knew, and I don't know what's next. So that famous quote, what got you here will not get you there, will become very, very true.

So that's what the tester of today will have to figure out. Of course, in the testing area itself, not all of it is going to get disrupted, but one can't think, I am going to look at only this. We end up talking about the entire I-shape to T-shape journey; the T-shaped developer or the T-shaped tester is going to become even more critical in today's age.

So I think that's what the tester of today and tomorrow has to think about. I could talk about this almost the entire day. It's a topic that is very, very close to my heart, because I don't think people get enough of that mentoring, especially testers who are just entering the workforce.

Unfortunately, with COVID happening, a lot of them have possibly always worked from home. They have not had the opportunity to learn from others, to learn from their seniors. Those are the things which people, whether it's a developer or a tester, need to look out for, because it is very easy to get completely disrupted.

And I'll end with a tweet from Elon Musk, who is apparently saying that by next year, Gen AI is going to be smarter than any human.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - I am not too optimistic about that by the way. I don't think that is possible. I don't think so.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Yeah, because I think empathy is something which is obviously extremely critical. And, you know, if you read Simon Sinek, he has a very, very interesting perspective about, you know, the new generation, you know, the XYZ, whatever they're called, and how they require more mentoring, etc.

And if I look at the new testers, you know, that's something which is even more critical for them because I don't think they have the great thing that we had when we started our careers, which was learning from others, really becoming a part of a community, becoming a part of a tribe, somebody whom you see every day, you know, you go to coffee with, you stand next to the water cooler and say, hey, you know what, I've got this problem. Can you tell me how I can solve it?

So that's another thing the tester of today has to be aware of, whatever they decide to do. People will figure out new ways of doing it, but they have to be aware that these things are going to be extremely important and critical. And if I encapsulate all of this, it's all about having that growth mindset.

Today is a time when it's really, really important that you don't get stymied within your own box, and that you really start looking at and exploring different things.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - GenAI will be an enabler in all of this to learn a lot more. With that, thank you very much, Ritu, for sharing your experiences and insights. Truly appreciate you taking time out during the weekday. Thank you so much.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Thanks, Maneesh, and thanks to your entire team. I do hope your bigger team finds this useful. And I look forward to working more closely with you guys in the coming few months.

Maneesh Sharma (Chief Operating Officer, LambdaTest) - Thank you. I think community always loves to hear from experiences, so I'm sure they will appreciate it. Thank you everybody for joining us here today and looking forward to having you next time around as well. Thank you.

Rituparna Ghosh (Vice President & Head of Quality Engineering & Testing, Wipro) - Thank you!
