XP Series Webinar

Building Resilient Quality Engineering Teams: Exploring Emerging Trends and Best Practices

May 1, 2025

53 Mins


Purva Jain (Guest)

Director of Quality Engineering, Veeva Systems


Harshit Paul (Host)

Director of Product Marketing, LambdaTest


The Full Transcript

Harshit Paul (Director of Product Marketing, LambdaTest) - Hello, everyone, and welcome to another exciting episode of the LambdaTest XP Podcast Series. Through XP Series, we dive into a world of insights and innovation, featuring renowned industry experts and business leaders in the testing and QA ecosystem. I'm your host, Harshit Paul, Director of Product Marketing here at LambdaTest, and it's a pleasure to have you with us today.

Today, we are going to talk about the growing role of generative AI in test strategies and how shift-left and shift-right approaches are transforming testing as we know it. We'll also take a deep dive into test observability and understand why it's not just a buzzword, but a cornerstone of continuous quality.

And to walk us through this episode, we have Purva Jain, Director of Quality Engineering at Veeva Systems. Purva brings 18 years of experience across the life sciences and finance industries. She's not just a quality champion but also a tech trailblazer, having led quality for mission-critical platforms like Vault CRM, Veeva CRM, Align, and more. Hey, Purva, we're absolutely thrilled to have you with us.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Thank you. Thank you so much for that beautiful introduction. I'm really, really flattered that you invited me here to talk about my favorite topic, quality engineering.

Harshit Paul (Director of Product Marketing, LambdaTest) - And it's great to have you here as well. I'm sure the audience is looking forward to the session. Speaking of the session, Purva will help us get into the future of automation, how to design scalable, maintainable test architectures, and how to make decisions when it comes to test strategies.

Beyond tools and frameworks, we'll discuss the people aspect: how to nurture leadership in quality engineering and create a culture of accountability. There's a lot coming your way in this episode. And to start off with the very first question for you, Purva: what fundamental shifts have you observed in quality engineering over the past few years in your organizations that everybody needs to be aware of?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, so when I started my career in engineering, quality engineering was very, very isolated and very much done as an afterthought. Design and development were always the priority, and it was mostly the waterfall model. Once development was done, we would hand off the entire BRD documents, the quality team would come in at the end, and it was always a struggle.

But now things have completely changed. Over the past 10 years, quality engineering has become an integral part of the development life cycle. With the entire agile methodology, it has evolved from just basic testing to a focus on the holistic customer experience across all business processes.

So the shift is that quality engineers are not just testing what the functional spec says; they have to be champions. As you mentioned, they have to understand the tech stack, the technology the application is built on, the end-to-end process flow, the logs, the monitoring, the observability, every aspect of the product.

So they are not just the quality owners of the application but of the whole business process, from design to delivery to customer success and customer feedback. Quality engineers are now involved in every aspect.

That's what we call the shift-left approach, where quality is involved from the very beginning. And the shift has happened because the industry is moving towards customer success. Customer success has become an obsession now; if you look at all the big product-based companies, their core value is customer success.

So how do you get customer success? You can only get it with a quality application. Earlier, organizations were more engineering-driven. Ten years ago, engineers would decide how things were built, and even the product management role was not that critical.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. Customer centricity was kept more at bay back then. People were focused on shipping fast and adding more to the product, but the customer experience wasn't as heavily emphasized.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Exactly. Yes. Now that all industries have shifted towards customer success, everyone really needs quality in their applications. And with so much competition, what gives you the cutting edge? When we ask our customers why we are leading the life sciences industry right now, the response is: because we deliver quality software.

So that's what gives an application its edge. Sometimes one company starts with an idea and builds an application, and then another company essentially copies the same thing but does it with quality, and they succeed.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right, and to an extent people might also have heard this a lot: quality is everyone's responsibility. I think that's a shift we've observed too, because earlier it was limited to the QE or QA teams only, but now everybody looks at it more holistically, like you just said. Teams are coordinating with CS teams and getting involved sooner, and at the same time making sure it's not just one team's job. It's everyone's job.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Exactly, exactly. Quality engineers become the influencers for continuous improvement, the final catalyst that improves the end product or service. So it's everybody's business. And I love this quote attributed to Aristotle: quality is not an act; it's a habit.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, yes, it is.

Purva Jain (Director of Quality Engineering, Veeva Systems) - And it's really true. I truly believe you cannot be quality-focused for just one day. The industry is moving towards that trend where it's everybody's business, as you said. It's not a one-time act; you have to do it on a daily basis. So it's very exciting to see how this role has evolved from QA analyst, quality assurance, to a quality engineering role.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, engineering, exactly. Speaking of all these shifts and trends, we hear so much about AI these days. How do you think QE teams can effectively validate AI-driven features in applications while also using AI for testing?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, this is the latest thing I'm noodling on and working towards. We have some AI modules in our product, so we are very much into it right now. And it's a challenge. The space is still growing and there is a lot out in the market; every day something new comes up. There is DeepSeek, there is a new ChatGPT, there is Manus, and every day you wake up and first check what's new.

So I feel it's very important to first understand the purpose of your AI application. If you are planning to test it, what is its purpose and what is the output? Is it recommending something? Is it detecting patterns, or is it making a decision? Define the purpose first.

First define the purpose, then define clear acceptance criteria. And with AI, it's usually not just pass or fail; the outcome is different here. We cannot simply say this is working or this is not working, because a lot of it is probabilistic. So the acceptance criteria could be something like 80% of your AI-generated output. Take a real-world scenario: say in e-commerce you are testing a recommendation engine.

Harshit Paul (Director of Product Marketing, LambdaTest) - Makes sense.

Purva Jain (Director of Quality Engineering, Veeva Systems) - So your acceptance criteria would be that 80% of the AI-generated recommendations should be relevant based on your test data. You cannot just mark each recommendation as a pass or a fail; you have to dig deeper. So I believe that in order to test an AI tool, AI can help you, but you have to mix AI with a traditional testing approach as well.
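
To make that probabilistic acceptance criterion concrete, here is a minimal, hypothetical sketch of such a check in pytest style. The `recommend` function, the labeled `TEST_CASES`, and the relevance check are invented stand-ins for a real engine and its test data; only the 80% threshold comes from the conversation.

```python
# Hypothetical sketch: asserting an aggregate relevance rate instead of
# a per-recommendation pass/fail. All names and data here are invented.

RELEVANCE_THRESHOLD = 0.80  # the acceptance criterion from the discussion

TEST_CASES = [
    # (user profile, product ids considered relevant for that profile)
    ({"user_id": "u1", "history": ["running-shoes"]}, {"socks", "insoles"}),
    ({"user_id": "u2", "history": ["laptop"]}, {"laptop-bag", "mouse"}),
]

def recommend(profile):
    """Dummy stand-in for the real recommendation engine under test."""
    if "running-shoes" in profile["history"]:
        return ["socks", "insoles"]
    return ["mouse", "laptop-bag"]

def test_recommendation_relevance_rate():
    relevant = total = 0
    for profile, expected in TEST_CASES:
        for item in recommend(profile):
            total += 1
            relevant += item in expected  # crude relevance check
    assert total > 0
    assert relevant / total >= RELEVANCE_THRESHOLD

if __name__ == "__main__":
    test_recommendation_relevance_rate()
    print("relevance criterion met")
```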

So I would say it's a hybrid approach. By traditional testing I mean what you usually test: the API responses, the performance, whether the throughput is on time, and real-world scenarios. And then on the AI side, there are areas where AI can help you.

So I created a spreadsheet where I broke down all the tasks my teams do on a daily basis and categorized them: what can be done with AI and what cannot. That exercise was very helpful, and what came out of it was that intelligent test case generation could be done through AI.

Of course, the results won't be 100% accurate, but they at least give you efficiency; you don't have to write everything from scratch. And there is a difference, especially when I'm working on highly regulated enterprise software. In e-commerce, test cases are very, very similar: booking a hotel room on Hilton versus Marriott involves very similar use cases.

So there you can exploit AI much more, because those flows are generic and the test case generation models out there have been trained on them. I was at a conference where I asked this same question about test case generation for very regulated, proprietary enterprise software, and it's not there yet in terms of accurate results. You can use it to generate efficiently, and then comes the test case review, which is the second round.

Purva Jain (Director of Quality Engineering, Veeva Systems) - That round, quality engineers still have to do: reviewing AI's work as of now and modifying it a little. But for the basic tests an application needs, it's there; you can use AI test case generation tools for those. If we're talking 100%, though, we are not there yet.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yes, I agree.

Purva Jain (Director of Quality Engineering, Veeva Systems) - The second place would be test selection and prioritization. There, machine learning models can identify high-risk areas in your application and, based on past failures and code changes, let teams focus on a more efficient test case selection process.
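
As an illustration of that idea, here is a small, hypothetical risk-scoring sketch: it ranks tests by historical failure rate plus whether they touch recently changed files. The weights, fields, and data are invented for the example; a real system would learn them from CI history.

```python
# Hypothetical sketch: prioritizing tests by past failures and code
# changes. All data and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    runs: int            # how many times the test ran recently
    failures: int        # how many of those runs failed
    files: set[str]      # source files this test exercises

def risk_score(t: TestRecord, changed_files: set[str]) -> float:
    failure_rate = t.failures / t.runs if t.runs else 0.0
    touches_change = bool(t.files & changed_files)
    # Weighted blend: past failures plus proximity to the current change.
    return 0.6 * failure_rate + 0.4 * touches_change

def prioritize(tests: list[TestRecord], changed_files: set[str], top_k: int):
    return sorted(tests, key=lambda t: risk_score(t, changed_files),
                  reverse=True)[:top_k]

if __name__ == "__main__":
    tests = [
        TestRecord("test_checkout", 50, 5, {"cart.py", "payment.py"}),
        TestRecord("test_login", 50, 0, {"auth.py"}),
        TestRecord("test_search", 50, 1, {"search.py"}),
    ]
    for t in prioritize(tests, changed_files={"payment.py"}, top_k=2):
        print(t.name)
```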

Harshit Paul (Director of Product Marketing, LambdaTest) - That's actually the predictive side of things, where AI can look at your past test results and predict where you're heading. That can be a big feature for understanding how flaky tests are resolved over time, and it can categorize errors for you. So yes, with generative AI you can create test cases, but there's also predictive AI, which helps you understand what's happening down at the execution level over a given period. Big, big advantage there.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yes, exactly, big advantage there. And then automation. Now there are agents for everything: an agent to create test cases, an agent to prioritize them, one-click automation, no-code automation, you name it and we have it. So it's there.

You can use it. But again, as I mentioned earlier, for an enterprise app these one-click automations are not yet so thorough or robust. For web applications that are not that complex and have basic flows, though, it works like a charm. I've seen that.

Harshit Paul (Director of Product Marketing, LambdaTest) - All right.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Definitely very useful. Visual testing, now. Earlier we used to say automation cannot capture things pixel by pixel. Now with visual testing there are Applitools, LambdaTest, Percy, so many tools, and they do visual testing much better than manual testers. People joke that they used to wear out their glasses doing this testing. With AI-powered visual testing, that is a place where you can save your quality engineers' time.

Harshit Paul (Director of Product Marketing, LambdaTest) - That's efficiency. That's real efficiency there, yes.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah. Also, when you have bulky automation frameworks, many teams struggle to maintain them. With AI, self-healing test automation automatically heals the test based on the functionality. But sometimes your requirements are so tight that you cannot just rely on the self-healing tool.

You still have to go and check whether your automation actually healed the right way, because I have noticed that sometimes the AI makes a decision based on past learning: it decides a pop-up looks right and by design, and passes it, when actually it's a regression. So we have to be careful with self-healing test automation, but 80% of the time it works.
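
Self-healing implementations differ by vendor, but the underlying idea can be sketched. Below is a minimal, hypothetical Selenium example that falls back through alternative locators and logs every "heal" so a human can review it afterwards, as Purva recommends. The locators and page are invented.

```python
# Rough sketch of the self-healing idea: try a primary locator, fall
# back to alternatives, and log any "heal" so a human can verify it.
# Locators and the URL are invented for illustration.

import logging
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

logging.basicConfig(level=logging.INFO)

def find_with_healing(driver, locators):
    """locators: list of (By.<strategy>, value) pairs, primary first."""
    primary = locators[0]
    for by, value in locators:
        try:
            element = driver.find_element(by, value)
            if (by, value) != primary:
                # Surface the heal instead of silently passing.
                logging.warning("Healed locator %s -> %s; review needed",
                                primary, (by, value))
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No locator matched: {locators}")

if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get("https://example.com")  # placeholder page
    find_with_healing(driver, [
        (By.ID, "submit-btn"),                      # primary
        (By.CSS_SELECTOR, "button[type=submit]"),   # fallback 1
        (By.XPATH, "//button[contains(., 'Submit')]"),  # fallback 2
    ])
    driver.quit()
```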

Test data generation is one more area AI has taken over. Five or ten years ago, we first used tools to generate data; then everybody started coding their own scripts to generate it. Now with AI you just write, "I want this data generated in this format." I remember using tools like Mockaroo and scripting in Ruby. Now it's so easy with AI; you can save so much time.
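
For comparison with those hand-rolled scripts, here is a small sketch of programmatic test data generation using the Faker Python library. The record schema is invented for the example; an AI assistant would typically produce something similar from a plain-language prompt.

```python
# Minimal sketch: generating synthetic test records with Faker.
# The schema (name/email/signup_date/plan) is invented for illustration.

import json
import random
from faker import Faker

fake = Faker()
Faker.seed(42)   # reproducible data across runs
random.seed(42)

def make_user():
    return {
        "name": fake.name(),
        "email": fake.email(),
        "signup_date": fake.date_this_decade().isoformat(),
        "plan": random.choice(["free", "pro", "enterprise"]),
    }

if __name__ == "__main__":
    users = [make_user() for _ in range(5)]
    print(json.dumps(users, indent=2))
```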

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, exactly. I think as a tester, or as a QE altogether, you're always against the clock because there's always so much to test. So looking at these AI-based features, it's more about how productive you can be rather than whether they're going to solve your problem 100%.

No, because of course they're at a stage where things are still maturing to a certain extent. But yes, they are evolving fast and solving a lot of use cases for a lot of teams. They may not be solving 100% of your use cases, but I'm pretty sure you can get good value out of them if you get your hands dirty.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, you have to try out what works for your application, but you cannot ignore AI. It is here and it is here to stay, so it's better to upskill yourself, and we'll touch upon that a little more. One more area I can think of is debugging.

Harshit Paul (Director of Product Marketing, LambdaTest) - It is here to stay.

Purva Jain (Director of Quality Engineering, Veeva Systems) - When you are debugging an issue, even basic Chrome AI helps: in Developer Tools there is AI assistance, which is really helpful for me. If you are testing any web application, I highly recommend turning it on. Go to Developer Tools, and the AI assistance gives you insight into the network call you are inspecting: why it failed, why it is slow.

It breaks things down and gives you the analysis, which is so helpful when you're logging a defect. The AI has already done the initial analysis a developer would otherwise have to do, and you're adding that straight into your Jira defect. It saves a lot of time.

Harshit Paul (Director of Product Marketing, LambdaTest) - It does, it does. When we talk about debugging, AI-based RCA is another thing that's trending, saving a lot of effort there as well. But there's a lot to AI. We touched on shift-left a bit earlier, but there's also a shift-right aspect to testing approaches. Between shift-left and shift-right testing, what are some practical strategies teams can use effectively?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, so shift-left is definitely important. As we were saying about how quality engineers have evolved, they need to be involved from the very early phase. I don't know why we weren't doing that earlier, because it's very basic and very obvious: it saves a lot of time in releases and release planning.

You know your code is solid and you will hit the delivery date on time, because you have been testing throughout. There's no last-minute surprise of "oops, we committed this to the customers and now we cannot make it." So I really believe in shift-left.

But at the same time, shift-right is also getting very, very important. Let me quickly explain both first: shift-left is when quality engineers are involved from the beginning, and shift-right is when you are testing things in production.

If you are only doing shift-right, it probably won't work out for most big organizations. A combined approach is more ideal: you deliver solid code into production, but at the same time, for future features, you use the shift-right approach to get data and feedback from your production environment.

Some of the shift-right approaches we have seen in the past and that are trending now are feature flags. You use LaunchDarkly or other tools like that to deploy your new features to production, but hidden, exposed only to certain users, and you can get valid information from that.
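
As a rough, self-contained illustration of a hidden feature behind a flag, here is a sketch written in the spirit of tools like LaunchDarkly, but not using their actual SDK. The flag, users, and feature are invented.

```python
# Self-contained sketch of a feature-flag check, in the spirit of tools
# like LaunchDarkly but NOT their actual SDK. The flag config is
# hardcoded here; real systems fetch it from a flag service at runtime.

FLAGS = {
    "new-checkout-flow": {
        "enabled": True,
        # Deployed to production but hidden from everyone except
        # this allowlist of internal/pilot users.
        "allowlist": {"qa-team@example.com", "pilot-user@example.com"},
    },
}

def flag_enabled(flag_key: str, user_email: str) -> bool:
    flag = FLAGS.get(flag_key)
    return bool(flag and flag["enabled"] and user_email in flag["allowlist"])

def checkout(user_email: str) -> str:
    if flag_enabled("new-checkout-flow", user_email):
        return "new checkout flow"   # shift-right: observed by pilots only
    return "existing checkout flow"  # everyone else keeps the stable path

if __name__ == "__main__":
    print(checkout("qa-team@example.com"))   # -> new checkout flow
    print(checkout("someone@example.com"))   # -> existing checkout flow
```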

Now, yes, with AI you can generate as much data as you want and synthesize different types of scenarios, but it is still not as good as real data and real concurrency. For that, shift-right is important: what users are doing on a real-time basis, especially for customer-facing applications. If you are testing a security app, how many times customers are looking at their security devices, those inputs are very, very important.

And even to train your AI model, you need this data for your future features, so it's very important. One more thing we do is the A/B testing approach, where you ship features to certain sets of users, segmented however you want.

Some people do it by region, some by levels or tiers: we'll introduce these features to first-tier customers of our application first. And then there's real-time monitoring, which we'll talk about under observability, but that's very important as well. So it's a combination of both shift-left and shift-right that helps an organization maintain a solid feedback loop.
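
A common building block behind the A/B and tier-based rollouts described here is deterministic bucketing: hash the user ID so each user lands in a stable cohort without any stored state. A hypothetical sketch:

```python
# Hypothetical sketch: stable cohort assignment for A/B tests or
# tier-based rollouts. Hashing the user id keeps each user in the
# same bucket across sessions without storing any state.

import hashlib

def bucket(user_id: str, experiment: str, buckets: int = 100) -> int:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def variant(user_id: str) -> str:
    # Roll out to 10% of users first; a tier check could gate this
    # further, e.g. only first-tier customers are eligible at all.
    return "B" if bucket(user_id, "checkout-exp") < 10 else "A"

if __name__ == "__main__":
    for uid in ("user-1", "user-2", "user-3"):
        print(uid, variant(uid))
```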

Harshit Paul (Director of Product Marketing, LambdaTest) - That makes a lot of sense. So it has to be a hybrid approach; you can't just stick to one. You use shift-left to push things faster. And you dropped a very interesting word there: observability. Every team is chasing observability so they're aware of what's happening, right?

But some teams struggle to make sense of the massive data generated by these observability tools. What advice would you offer that audience? How can they do better?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, you said that right. Observability, we are all talking about it. We use Prometheus, there's Datadog, there are other tools out there. They generate massive data, and it can get overwhelming to find what is actually valuable. One of the things I feel is very important when you are implementing a new observability tool is structure. You have to have a structure.

But even before structure, it's important to understand what you are looking for, rather than just opening a tool and staring at fancy dashboards. There was a point where we were looking at so many dashboards, and we took a step back and asked: what are the most important things we need observability on? We cannot monitor everything.

So first the "what," and then you get to structure. Tags are very important, and so are logs, traces, and metrics. One thing that really helped us: all of these need to be part of your life cycle. It should not come as a separate feature, as in "I am delivering a feature right now, and we will do the metrics, tracing, and logging later."

That "later" never comes; it's like how tomorrow never comes. So when we are developing a feature, it should ship with these things built in. It's the same as how we were saying quality engineering is part of the feature squad now.

These metrics need to be developed while you are developing the feature, and this is also a shift. Earlier, you would develop a feature, and then your CEO or your GM would ask: are folks even using it, how many people are using this feature? You want to make decisions based on feature usage.

Then we would come back and create another story or epic to add hooks to collect the data to answer those questions. So it was an afterthought too. My suggestion is to ingrain it in your regular sprint cycle: when you are developing a feature or handing it off to QA, it should be part of the handoff. Hey, what metrics, logs, and traces are being developed with this feature? How will I answer these questions from the logs? That should be your question before you even start to test.
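
As a small illustration of shipping metrics with the feature itself, here is a hypothetical sketch using the prometheus_client Python library (Prometheus was mentioned above). The metric names and the feature function are invented.

```python
# Hypothetical sketch: metrics defined alongside the feature code,
# not bolted on later. Metric and feature names are invented.

import time
from prometheus_client import Counter, Histogram, start_http_server

FEATURE_USES = Counter(
    "report_export_uses_total",
    "How many times the export-report feature was used",
)
FEATURE_LATENCY = Histogram(
    "report_export_seconds",
    "Latency of the export-report feature",
)

def export_report(report_id: str) -> str:
    FEATURE_USES.inc()              # answers "are folks using it?"
    with FEATURE_LATENCY.time():    # answers "is it fast enough?"
        time.sleep(0.05)            # stand-in for the real work
        return f"report-{report_id}.pdf"

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    export_report("42")
```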

Alerting is also very important. And then there's alert fatigue.

Harshit Paul (Director of Product Marketing, LambdaTest) - We've all been there. We've all been there.

Purva Jain (Director of Quality Engineering, Veeva Systems) - I suffer from that, I do. So that's one more thing: invest in designing your alerting the right way. You do not want an alert on every API failure. You have to ask: if I get this alert, will it be a P1? Will it be critical? That's what you encode in the alert, and then you start your escalation runbook from there.

Harshit Paul (Director of Product Marketing, LambdaTest) - Makes a lot of sense.

Purva Jain (Director of Quality Engineering, Veeva Systems) - So all these four things go hand in hand. First, understand what you are looking for; then put a structure on your observability data; set the right alerts; and then, since alerts alone are not enough, have a runbook to act on each alert. Who will act on it? When? Define the right SLAs for it. Every alert needs a certain priority and a runbook, an action item attached to it.
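
To illustrate that last point, here is a tiny, hypothetical sketch of an alert catalog where every alert carries a priority, an owner, an acknowledgment SLA, and a runbook link, so nothing fires without an action path. All names and URLs are invented.

```python
# Hypothetical sketch: every alert is registered with a priority,
# an SLA, an owner, and a runbook. Names and URLs are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class AlertPolicy:
    priority: str         # e.g. "P1" pages on-call, "P3" is ticket-only
    owner: str            # team responsible for acting on it
    ack_sla_minutes: int  # how fast someone must acknowledge
    runbook_url: str      # the documented action plan

ALERT_CATALOG = {
    "checkout_error_rate_high": AlertPolicy(
        "P1", "payments-oncall", 15,
        "https://wiki.example.com/runbooks/checkout-errors"),
    "report_export_slow": AlertPolicy(
        "P3", "analytics-team", 24 * 60,
        "https://wiki.example.com/runbooks/slow-exports"),
}

def route(alert_name: str) -> str:
    policy = ALERT_CATALOG.get(alert_name)
    if policy is None:
        # An unregistered alert is itself a process bug.
        raise KeyError(f"{alert_name} has no priority/runbook; register it")
    return (f"[{policy.priority}] notify {policy.owner}, ack within "
            f"{policy.ack_sla_minutes} min, runbook: {policy.runbook_url}")

if __name__ == "__main__":
    print(route("checkout_error_rate_high"))
```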

So that's observability. It's key, it's very important. And it's one of the things I always remind QEs and developers about: please focus on building the right metrics while you are working on a feature, because later you will be out of context. You will never come back to it.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. What you've just shared is really a set of actionable insights, an entire playbook, honestly. There's a lot to take from it. But there's also the dilemma of speed and quality going hand in hand, and people could probably write entire books on the subject. If our viewers were to take something back with them, how would you suggest they balance speed and thoroughness in their automation strategies?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, I know, I could talk on and on about it, but the one critical thing I would say is: keep it simple. Don't over-optimize for frameworks and scripts. There is so much new in the market every day, Selenium, Playwright, this and that. Stick to one thing and see if it's giving you the ROI.

Prioritize your critical use cases and automate only those. Do not go for automating everything; it is a dream that will always stay a dream. I used to chase that early in my career, and now, after 18 years, I can confidently say it, even with AI.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, 100% automation.

Purva Jain (Director of Quality Engineering, Veeva Systems) - You can do whatever you like; you cannot achieve 100% automation. That's just not possible. We have to be practical, because we are delivering so much every day. The world where you write code and automation follows immediately, we are not there yet. Yes, there are places where in-sprint automation is real; I'm not denying that.

Harshit Paul (Director of Product Marketing, LambdaTest) - That's great.

Purva Jain (Director of Quality Engineering, Veeva Systems) - It's there, but still, for big enterprise software, I would suggest focusing on your critical use cases and keeping it simple. Do not invest in automating edge cases, because by the time you automate them, there will be a different set of edge cases, and at some point you'll have to retire the old ones. Just keep it simple.

One more key thing from my experience: for automation to succeed in an organization, it needs everyone involved. Involve and align all the stakeholders from the beginning. This applies to any leadership situation: if you have individuals who are invested and were part of the decision-making, they will be more committed to it.

Whereas if you have decided things in a silo and then come back saying, "please use this framework, let's go with these processes," that doesn't work out well. Then people start blaming: this strategy didn't work, this tool didn't work, automation is lagging behind.

It's a partnership; you cannot do automation separately either. In a scrum team, dev, PM, functional QAs, and automation should all be together and work as one team to achieve automation for their areas.

Harshit Paul (Director of Product Marketing, LambdaTest) - Got it, makes a lot of sense. So keep it simple and work together. A lot of the innovations of the past decades have been about breaking silos, like you mentioned, so that's extremely critical. But from a QE lead's perspective, they might also stumble into situations where there is delivery pressure but they have to keep quality in check. How do they advocate for quality when faced with such delivery pressures?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, I think this is more for the leaders, I would say. First of all, leaders need a thorough understanding of what the customer needs. If your quality engineering leader doesn't understand what is needed, they won't be able to make the right judgment. Every single day, you have to make decisions balancing customer success and quality.

As we were saying earlier, customer success is a core value and an obsession for everyone now. To achieve it, you have to take a certain amount of risk, calculated risks, on a daily basis. And you can only make the right decision when you understand the business processes, your tech stack, and what is critical to the customer.

I'll give you an example. Say there is a feature with more issues; I know it has 50 or 55 defects, but we have committed it to a customer and still have to ship it. If I don't understand what customers use more versus less, I can't make that call. You have to have the right dashboards in front of you to see the usage and what is critical.

If you see that in the entire market no one else is using this feature, we are shipping it only for this customer, it will take them time to configure it, and we will fix these defects in the meantime, then you make the decision to ship. But some features are critical, absolute musts for customers' daily work. That's the battle you pick.

That's where you say we cannot compromise on quality here. Understanding your customers is the key. That's why I always suggest QA leaders work closely with their product managers to understand, on a daily basis, what customers want, so they can make these decisions. Making the right decisions is very important, and you have to present them to your leadership; I'm the one who signs off on these releases, right? So there's always a balance to strike. You cannot aspire to zero defects. Defects are going to come. Otherwise, job security, right? I would not exist. So yeah.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, I think it all comes down to what we pitched at the beginning: customer centricity. Work closely, identify the mission-critical features you can never compromise on, the things customers get the most value from, and make sure they don't break.

And you have to be vocal. A lot of it is about how vocal leaders need to be when setting ETAs and explaining deadlines. They can only be confident and vocal when they know what's happening. So as you said: know the product, know what's being tested, know your customers, know it all.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, you touched on a very important thing. Being vocal is very important. A certain kind of personality is needed if you are a quality leader. Everyone has different strengths, but this is one thing that is absolutely needed. If there is a room full of people...

...sometimes it is the quality leader who has to speak up and say, no, we are not ready for this release. So being vocal is absolutely needed. You don't have to be the loudest in the room, but you need to make your point and back it up with proper data. It should not come from your feelings.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yes, data over feelings.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Feelings like, "this team worked on it, so the quality might not be good"? No. Whenever you are presenting, present with the right set of data: this is what we tested, this is where I feel the quality should be, and this is where we are. And one more thing I advise all budding leaders: don't just come with a problem.

Always think of a solution beforehand. It's not enough for a quality leader to just hand over a defect report; what are the solutions? And a leader should always be part of the life cycle. It should not be that on the last day you review all the defects and call a no-go. No, absolutely not. That's not the right approach.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Every day you have to be monitoring all these feature squads, understand where the risk is, and intervene when needed. Trust them, but when you feel a squad is not going anywhere, ask: what can we do? Talk to your engineering managers and product managers, talk to your partners, figure out a plan, but you have to be the first one to initiate that plan.

So I always say: do not just come with a problem. Think through a solution in the background, present multiple solutions, and let the leader or the group decide what is best for that particular situation.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, and that's really important, because a lot of the time, especially in quality, people assume they just have to raise the alarm. Don't stay focused on narrating problems; be a problem solver. That's important as well, especially for leaders. You can't just be vocal about what's breaking; you have to understand why it's breaking.

The more you understand that, the better and faster you can fix things. So, just like you said: break down the silos, break down your bias, and be there as a problem solver. Nothing else, right?

And we talked about silos as well; I think we should dig into that a bit, because a lot of times, as much as leaders want their teams to be transparent, let's be honest: there's always something happening in some corner of the workspace that not everybody is aware of, right?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Exactly.

Harshit Paul (Director of Product Marketing, LambdaTest) - So, what are some practical approaches that you think can help break down these silos between QE and, say, other technical teams?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yeah, that's a good question. With the speed at which we, and every industry, are moving, it's very important not to create these small pockets of information. Collaboration is the key. You have to collaborate. And I feel the approach of feature squads working closely together works well for us. We trust the whole feature squad with their feature.

The QA team isn't a separate team from the devs or the PMs. No, you are all working towards a specific cause, delivering this feature, and you are one team. It's not QA versus dev. One team, one mission. You have to foster that: give the team a purpose, and make the setup such that they feel they are one team. That's why, I think, our CEO...

Harshit Paul (Director of Product Marketing, LambdaTest) - One team, one mission, quality.

Purva Jain (Director of Quality Engineering, Veeva Systems) - ...he always says this thing: an engaged team works together. It's not about your title or your team; it's about the purpose. You are driven together by a purpose. And when I talk about a team, I mean the devs, PMs, and QAs who are working together as a feature squad.

Everyone has their own strengths. As a leader, you should identify whose strength is what and assign tasks accordingly, so people can shine at what they're strong in. And it really works out. If someone in a squad is very good at project management...

...let them lead that. It should not be "only this person does this project." Let them take the lead, collect the estimates, come up with a release plan. It should not be title-driven. Sometimes these titles restrict people, as in an environment of "you are a dev, you can only do this." Of course, certain things need to be defined.

But in an atmosphere where people are all collaborating and have a sense of oneness, they will get things done, and it breaks down those small blocks and pockets of information as well. As I was saying about metrics: if engineering and QA are both talking about metrics and logs, they are speaking the same language. Right.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. And as a big part of that, they could also take up collective code reviews, for instance. Why let it be just one team's responsibility when you can have two brains on the same thing, both understanding what's supposed to happen?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Exactly. And that, I think, is the need of the hour for quality engineering: they have to speak the same language the engineers are speaking. And these days we tell engineers, too, that it's not just the code; you have to understand the functional aspect of it, because if you do, your code quality will automatically increase.

Harshit Paul (Director of Product Marketing, LambdaTest) - Makes sense.

Purva Jain (Director of Quality Engineering, Veeva Systems) - So a few things I suggest for leaders: trust folks, and monitor from a distance. Just be there for them when they need it. Let them work together, let them be together, and they will foster that sense of "we can do it." Because if all the leaders show up for every sync-up or every stand-up...

...then the individual contributors feel, "you know what, these leaders will take care of all the release planning and other things." Make it the squad's responsibility instead, and you'll see them do wonders. And then, sometimes, the in-person part is important. Everyone these days is adopting remote work, which is great, right?

It taps into a completely different world altogether. When you can hire remote folks, you can hire much more diverse talent, which is amazing. At the same time, try to make sure they do meet once in a while, sit together, have a good dinner. Because in the end...

...it's the relationships that take you far. If you know a person at a personal level, it's easier to talk to them. Conversations flow naturally in your stand-ups and when you need help at 12 o'clock at night; it's easier to approach them. That personal, social aspect is absolutely needed. So if possible, once in a while get together, have a drink, just chill, forget about work, and it will pay back in your work. It's very important.

Harshit Paul (Director of Product Marketing, LambdaTest) - So, one team, one mission, and one party every once in a while. Right? Of course. Yes, indeed. We've talked about so many things, and before we close this episode, there's one thing that's probably on everybody's mind these days, and it comes down to emerging tech. What do you believe will have the biggest impact on quality engineering in, say, the next four to five years?

Purva Jain (Director of Quality Engineering, Veeva Systems) - Yes, definitely. Work hard, party harder. It's hard to point to just one thing, but AI is here to stay, I would say. One thing I notice: when automation came in, folks were very worried then too that functional QAs would lose their jobs and things like that.

Now with AI it feels like the second wave; everyone has started questioning again. This time it's not just QA; they are questioning even engineers: do we really need engineers when there are agents coding for us? Over the next four or five years, I don't think things will change drastically. Every time, I've seen there is a wave of hype, then the hype settles and reality kicks in.

So yes, AI and ML are here; all these technologies are great tools, there to make us more efficient. Earlier, without AI, quality engineers had no time: generating data would take a day or two.

So you were losing out on your strategy and your functional domain value-add; you couldn't do that work because you were stuck with this tech burden. AI is here to help us reduce that tech burden, and it will continue to evolve. In the next four or five years, the self-healing tools I was talking about, which are still a little glitchy, and data generation for edge scenarios, maybe those will improve, I feel.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. And there are AI agents; everybody's cooking up their own AI agents, so there's interesting activity there altogether. Let's see what's coming. As you rightly said, AI is here and it is here to stay. And with AI on the horizon, so many things are evolving at such a pace that, forget four to five years...

...we don't know what will happen in four to five months. Something new could spin up out of nowhere, literally. Now we're talking about MCP, the Model Context Protocol, standardizing the way these AI agents communicate with each other. The space is innovating at a crazy pace at this point.

Purva Jain (Director of Quality Engineering, Veeva Systems) - Exactly, exactly. And forget just quality engineering; it's life. You have to keep at it. You have to keep learning on a daily basis.

Not just with technology but in life too, in relationships. You learn about your partner, your son, your daughter on a daily basis. It's an ongoing effort. So if you want to stay relevant, keep at it. Don't worry about what this or that will make of you. If you keep at it, whatever comes, just take it as a challenge.

Harshit Paul (Director of Product Marketing, LambdaTest) - Perfectly said. And with that, we will end this episode. What a session, Purva! Thank you so much for joining us and sharing your invaluable perspective. And to our audience who's been listening all this time: thank you so much for tuning in and being part of this engaging conversation on the LambdaTest XP Series. Stay tuned for more of the XP Series, where we continue to spotlight thought leaders and change-makers redefining the world of testing and quality. Until next time, happy testing, everyone!

Purva Jain (Director of Quality Engineering, Veeva Systems) - Thank you so much, Harshit. You've been amazing. Thank you.

Harshit Paul (Director of Product Marketing, LambdaTest) - So have you. Thank you so much, everyone. Bye-bye, happy testing.

Guest

Purva Jain

Director of Quality Engineering, Veeva Systems

Strategic Director of Quality Engineering bringing 18 years of experience in the life sciences and finance industries, with a proven track record of establishing and maintaining high quality standards. Deep expertise in ensuring the quality of key platforms like Vault CRM, Veeva CRM, Network, Nitro, and Align. Adept at integrating and leading quality initiatives involving emerging technologies such as AI, Salesforce, and AWS, alongside big data analytics, test management, and data warehousing. Holds a B.Tech. in Information Technology, complemented by certifications in Technology Leadership and Women in Tech from Cornell.


Host

Harshit Paul

Director of Product Marketing, LambdaTest

Harshit Paul serves as the Director of Product Marketing at LambdaTest, where he plays a pivotal role in shaping and communicating the value proposition of LambdaTest's innovative testing solutions. Harshit's leadership in product marketing ensures that LambdaTest remains at the forefront of the ever-evolving landscape of software testing, providing solutions that streamline and elevate the testing experience for the global tech community.

