May 29th, 2025
37 Mins
Nilmini Ariyawansa (Guest)
Quality Engineering Architect, KYG Trade

Kavya (Host)
Director of Product Marketing, LambdaTest

The Full Transcript
Kavya (Director of Product Marketing, LambdaTest) - Hi, everyone. Welcome to another exciting session of the LambdaTest XP Podcast Series. Through the XP Series, we dive into a world of insights and innovation featuring renowned industry experts and business leaders in the testing and QA ecosystem. I'm your host, Kavya, Director of Product Marketing at LambdaTest, and it's a pleasure to have you all with us today.
Let me introduce you to our guest on the show, Nilmini Ariyawansa, Quality Engineering Architect at KYG Trade. Nilmini brings a wealth of experience in software testing, automation, DevOps, and process excellence. Her career journey is truly inspiring, having previously served as Head of QA and QA Manager, where she successfully led high-performing teams to implement as well as scale quality strategies, streamline testing processes, and drive continuous improvement.
Today's session is all about exploring how to seamlessly integrate testing into agile workflows to deliver high-quality software faster and smarter. Isn't that what everyone wants? As businesses accelerate their digital transformation journeys, the need for efficient and scalable, as well as reliable, testing practices has become more critical than ever.
And Nilmini will dive into real-world challenges faced by teams working at scale and share practical solutions. So let's jump straight into the heart of the discussion. Nilmini, please tell us a bit about yourself and your QA journey with our audience.
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Thank you so much for giving me the opportunity to share my knowledge and insights with the industry. It is truly an honor to be here and contribute to the ongoing conversation around agile testing, test automation, CI/CD integration, and so on.
A little bit about myself: I have a strong passion for quality assurance, process management, automation, and CI/CD and DevOps practices, and I have worked across various environments throughout my career. I have focused on helping teams scale testing practices, improve efficiency, and ultimately deliver better software faster. Today I'm eager to share my journey and the lessons I have learned along the way. Thank you for that.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Nilmini. That's amazing. You know, the first question is, what are the biggest challenges that you see while integrating testing into agile workflows, and what are your recommendations or best practices to overcome them?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Yeah, there are a lot of challenges; I will highlight a few of them, especially for teams adopting testing in a fast-paced agile environment. The first is the agile mindset. To do anything well, the team needs the right mindset, built on a foundation of agile principles. Think about what we did in traditional QA: testing after development, finding defects, working in an isolated environment, and relying on manual-heavy testing, right?
Mostly, we did manual testing in a traditional QA environment, and we depended on heavy documentation; without a signed-off requirements document, QA would not even touch the testing. That is traditional QA. But with an agile QA mindset, we test during development. We are not waiting until the developer hands us a release to start testing.
In agile, testing starts alongside development, right? And we are not just finding defects; we are preventing defects early in the process. We also work as part of the team; agile teams have several members, and we work together, collaboratively, always with an automation-first mindset, because we have to deliver fast.
And we should be quality champions, not just document and process owners. Those are a few elements of the QA mindset in agile, and that is the big challenge: if you don't have that kind of quality mindset set up for the agile environment, it is very difficult to reach a successful outcome.
So that is one thing. To overcome it, we have to define a clear agile test strategy first, at the organization level; otherwise, the testing process inside each sprint becomes a mess. When building that agile test strategy, we should follow the test pyramid: consider which testing types apply across the testing life cycle and how we should sequence them.
Another challenge is time. Everybody knows this, right? In agile, we work with frequent releases and short sprints, typically ten working days for a two-week sprint, of which about eight days are available for executing tasks. The team has to deliver a set of story points in each and every sprint.
However, one of the key challenges we face is limited time for QA: builds often land mid-sprint or closer to the end, so there may not be much time left for thorough testing, which can impact coverage and overall quality. That's why we always suggest the shift-left approach.
We have to start testing at the beginning of development. We should not wait until the developer comes and says "we are done" to start testing. Together with the developers, we should start early; we have to enter the game early. That is one thing.
And as I said before, we should have an agile QA mindset and work together with the developers. We have to find bugs as early as possible, in the early phases, so that when we reach the right-hand side of the QA cycle, we can execute the tests directly and push releases out fast. That is the shift-left approach, right? Another challenge is selecting the right automation strategy.
That is a big challenge in an agile environment, where everything is fast-paced. The question is how to leverage automation well, and choosing the best automation strategy is a skill in itself. Teams often struggle to decide which tools or technologies to use and which scenarios should be automated versus tested manually. That lack of clarity leaves a significant gap in the overall strategy and approach.
So we have to be mindful in selecting the best automation strategy; I think we can talk later about how to approach automation in an agile environment. Those are a few of the challenges, I think.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Nilmini, for pointing that out. I was listening closely as you were speaking, because I think it starts right with the mindset change itself: embracing that agile mindset and understanding the principles that teams need to implement. Then, of course, once the foundation is built, it's about figuring out the next strategies as well as working on the shift-left approach and so on.
So thank you for sharing those insights. Moving on to the next question: can you also shed light on how teams or organizations should balance thorough testing with the rapid pace of agile development? Releases are happening at a very fast pace today; how do teams find that fine balance in order to keep up?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Well, yeah, that's a nice question. That is the reality of today's industry with the agile development methodology, right? We have to deal with things fast; customers ask for things very quickly. But quality is still number one, because customers do not want a release without quality; they want a quality product on time. Right?
So we have to align with that, which is why I said previously that we should go for the shift-left testing approach. That is the methodology, that is the way to work in an agile environment. There are a lot of ceremonies in agile: sprint backlog grooming, sprint planning, the daily scrum, and so on.
QA should participate in every possible ceremony from the beginning. And not only the QA lead: if you have three, four, five QEs, they should all participate at that level. Even if it is a customer-facing requirements-gathering session, that is not a problem; we should participate, and we should share everything within the QA team.
Right? Then we can start gathering requirements and analyzing things at the beginning of the sprint, and even make design suggestions to the architects and developers before they start to develop. That is what we call testing early.
That is the best practice. The other thing is we always think about the test pyramid levels: how to do unit testing, integration testing, API testing, UI testing, E2E testing, and UAT. We have to work through those pyramid levels. And beyond that, we have to think about the best automation strategy.
With the shift-left testing approach, we always pair it with test automation; that is part of the QA mindset: automation first. At the beginning, we can draft the test scripts in parallel with the developer. Once the developer hands the release to us, QA just changes a particular parameter, the URL or something, very quickly, and we don't waste time testing manually. Right. Simply put, testing together with automation gives us fast releases at speed. That is the way I mostly work, and I think it is a success.
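(For illustration: a minimal sketch of that parameter-swap pattern in C# with xUnit, which Nilmini recommends later in the conversation. The environment variable name, base URL, and /health endpoint are hypothetical; the point is that retargeting the suite at a new developer release is a configuration change, not a code change.)

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class SmokeTests
{
    // The target environment comes from configuration, not from the test
    // code, so pointing the suite at a new developer release only means
    // changing this variable. "APP_BASE_URL" is a hypothetical name.
    private static readonly string BaseUrl =
        Environment.GetEnvironmentVariable("APP_BASE_URL")
        ?? "https://staging.example.com";

    [Fact]
    public async Task HealthEndpoint_ReturnsOk()
    {
        using var client = new HttpClient { BaseAddress = new Uri(BaseUrl) };

        // "/health" is an assumed endpoint for this sketch.
        HttpResponseMessage response = await client.GetAsync("/health");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```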
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much for that insight. That is super insightful and good to hear that the automation-first approach actually helped your team. So, moving on to the next question, what's your approach to test data management in CI/CD pipelines while maintaining agile velocity?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Yes, good question. Let's discuss what test data management for the CI/CD pipeline is. Continuous test data management, which we call TDM, derives from the principle of continuous testing, and continuous testing is a key component of the CI/CD pipeline. With this, testing and test data management happen across the different phases of the CI/CD life cycle.
Test data management should always follow the test pyramid levels. Why? At the unit and component testing level, we use mostly synthetic test data: the test cases there are atomic, so hard-coded values and similar synthetic data work well. When we come to integration and contract-level testing in the CI/CD pipeline,
we use a hybrid model of test data there: optimized data coverage, a balance of data variety and volume, test data matching, and so on. We should consider it like that. When we come to the E2E, UI, and UAT levels, we should always have more production-like test data sets, with the right balance of data variety and volume, plus special data sets for things like security testing.
Some data we have to mask, right? Some data we fake, some we mock; it is a different setup for each testing type. We have to bind these together with the CI/CD pipeline through a test data repository, managing each data set separately, so that whenever a testing type kicks off, it retrieves the right data from that repository.
We also have to make sure we never leak any E2E-level test data, whether in an automated script or in manual testing, it doesn't matter. In the CI/CD pipeline, E2E-level test data is always masked, because it is production-like data. Right? That is the biggest thing when doing test data management throughout the CI/CD pipeline.
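(As a minimal sketch of those two ideas, synthetic data for the lower pyramid levels and masking for production-like E2E data, here is a C# example using the Bogus faker library. The Customer record and the masking rules are hypothetical, not from the conversation.)

```csharp
using System;
using Bogus;

public record Customer(string Name, string Email, string CardNumber);

public static class TestData
{
    // Lower pyramid levels: fully synthetic customers generated with Bogus.
    public static Customer Synthetic() =>
        new Faker<Customer>()
            .CustomInstantiator(f => new Customer(
                f.Name.FullName(),
                f.Internet.Email(),
                f.Finance.CreditCardNumber()))
            .Generate();

    // E2E level: production-like records are masked before they enter the
    // pipeline. Keeping only the last four card digits is one simple rule.
    public static Customer Mask(Customer prod) =>
        prod with
        {
            Email = "masked@example.com",
            CardNumber = new string('*', 12) + prod.CardNumber[^4..]
        };
}
```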
We can also use a model-driven approach for test data, as I mentioned previously, maintaining a central repository and the flows around it. And if we can use service virtualization in conjunction with the TDM model, that is one of the best things we can do.
Likewise, there are several kinds of test data management we can do throughout CI/CD. It comes down to how you model it, right? When modeling it, we should keep in mind the test pyramid levels, security concerns, virtualization, and so on. That is the way I mostly work.
Kavya (Director of Product Marketing, LambdaTest) - That sounds great, Nilmini, and very insightful as well to understand more about test data management and how you're tackling it throughout the different cycles within the agile model. And out of curiosity, I just wanted to ask if you could double-click on the service virtualization part that you mentioned, right? Could you share more about it?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Service virtualization is a well-established practice for agile development and testing. By virtualizing the dependencies of the system under test, we also reduce the test data management burden, because if a dependency's data is not accurate, our CI/CD pipeline can break and fail. That is a burden for us; we should not depend on that.
In fact, the type of test data used, whether it is synthetic, as I said, hybrid, or production-like, correlates with the extent of service virtualization used. At the bottom of the test pyramid, we aggressively use both synthetic test data and virtual services. Towards the top of the pyramid, at the E2E level, we use more realistic data with the real application components.
We can use a hybrid approach for the middle layers. This correlation of progressive service virtualization with progressive test data management along the CI/CD pipeline is very insightful. A diagram would be the best way to explain it, but unfortunately I cannot show one today. That's why I say we should always have a test data repository tied together with the services, gateways, and test pyramid levels.
And when you are designing the code and building the automation stack across the test pyramid levels, you have to retrieve the data appropriate to each level; that is the main point of virtualization. Also, when using virtual services for test data, we need to make sure we maintain consistency between the data used to drive the tests and the data the virtual services return, right? That is another major point. Likewise, a lot of what we do involves engaging with the databases and getting the data from there through the virtual services we use.
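(To make that consistency point concrete, a minimal service-virtualization sketch using the WireMock.Net library: the virtual dependency is primed with the same canned payload the test asserts against, so the stub and the assertion cannot drift apart. The endpoint path and payload are hypothetical.)

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;
using Xunit;

public class PricingClientTests
{
    [Fact]
    public async Task Quote_ComesFromVirtualizedDependency()
    {
        // Stand up a virtual service in place of the real pricing dependency.
        using var server = WireMockServer.Start();

        // One canned payload drives both the stub and the assertion,
        // keeping the test and its data consistent.
        const string cannedQuote = "{\"sku\":\"ABC-123\",\"price\":42.50}";

        server
            .Given(Request.Create().WithPath("/api/quotes/ABC-123").UsingGet())
            .RespondWith(Response.Create()
                .WithStatusCode(200)
                .WithHeader("Content-Type", "application/json")
                .WithBody(cannedQuote));

        using var client = new HttpClient();
        string body = await client.GetStringAsync(
            $"{server.Url}/api/quotes/ABC-123");

        Assert.Equal(cannedQuote, body);
    }
}
```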
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Nilmini, for sharing these insights, pretty insightful. Yeah. And moving on to the next question: what are the key benefits and challenges of using generative AI in agile testing?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - So yeah, generative AI has the potential to transform agile testing, right? But it is crucial to implement it thoughtfully and strategically, making sure it enhances manual testing and integrates seamlessly into existing agile practices. You know, AI is born from human creativity and has tremendous potential.
But we must be mindful not to rely on it entirely. Over-reliance on AI could hinder future innovation and creativity in the industry. That is my thought: the human mind must always stay at the center, guiding and enhancing AI's capability. To use AI effectively, we need a solid foundation of knowledge, technical skill, and logical thinking, ensuring it augments human abilities rather than replacing them outright.
That is one thing we need before using AI, right? And there are a lot of benefits nowadays. We get faster test case creation and maintenance. We can ask AI to adjust our test cases cleanly. We can also learn automation techniques from it and then leverage that in our own automation.
And, as I said before, we need fake data, mock data, and so on; we can use AI to generate that data. Sometimes, if we cannot produce a test data set for a particular scenario ourselves, AI will give us one if we ask, or generate a common class for us. We just need to be able to understand it and know how to leverage it in our code base.
That is the thing, right? But there are also a lot of challenges we should not forget. AI is developed by humans, on data collected from many different sources, so AI-generated tests may lack accuracy; we should not use them directly. That can also lead to trust issues, because AI does not hold our customers' full requirements. And integrating it with existing systems can be very complex; that is another challenge.
Kavya (Director of Product Marketing, LambdaTest) - Absolutely.
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Also, when we ask AI to create test cases for a particular scenario, it answers based on its understanding of its own sources; the requirements and domain knowledge sit with us. So another challenge is that AI can miss edge cases and real customer scenarios.
That is also challenging, and we should be very careful about it when using AI. Privacy and handling sensitive data are very challenging too; we shouldn't go there. And there is a lack of specialization in AI and ML: if you don't have a good foundation, if you don't have good technical skills,
and you just go to AI and ask it to do things, that will not work. Before using it, you should have that grounding. That is what I think. I use GitHub Copilot, and sometimes OpenAI, but I know how to use them. Before taking anything from them, I learn the background first and make sure I have a solid foundation for the particular scenario: how to draft it, how to structure the framework, how to architect it. I learn it first, and then go to GitHub Copilot to streamline my development time. That is the way I am using it currently.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Nilmini, that definitely throws a lot of light and also aligns with what we've been thinking about AI, right? What we shouldn't forget is that it's built by humans and the data has been fed by humans. So while there are multiple benefits, there are also challenges related to using AI in testing.
And based on your experience, how should agile teams ensure continuous feedback loops to improve software quality? Because I can imagine that with AI, there are more test cases being written, more code being generated, and so on. How should agile teams ensure there are continuous feedback loops to improve the quality of the software throughout?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Yes. There is a lot we can do, because agile has so many ceremonies: backlog refinement, sprint planning, retrospectives, sprint reviews, and more. We can leverage those sprint ceremonies to capture real-time feedback related to testing activities.
We can also perform sprint trend analysis to identify recurring testing issues, gaps, and areas of improvement over time. We can use a sprint health card, since we already draw the burn-down chart and collect the real issues in the retrospective. And we can plot key metrics like defect density, defect leakage, and automation coverage to measure the effectiveness of the testing effort.
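(As a rough illustration of those metrics, a C# sketch using the conventional formulas: defect density per thousand lines changed, defect leakage as the share of defects that escaped to production, and automation coverage as the share of automated test cases. All input numbers are hypothetical.)

```csharp
using System;

public static class SprintQualityMetrics
{
    // Defect density: defects found per 1,000 lines of code changed.
    public static double DefectDensity(int defects, int linesChanged) =>
        defects / (linesChanged / 1000.0);

    // Defect leakage: fraction of all defects that escaped QA to production.
    public static double DefectLeakage(int foundInQa, int foundInProd) =>
        (double)foundInProd / (foundInQa + foundInProd);

    // Automation coverage: fraction of test cases that are automated.
    public static double AutomationCoverage(int automated, int total) =>
        (double)automated / total;

    public static void Main()
    {
        // Hypothetical sprint numbers, purely for illustration.
        Console.WriteLine($"Density:  {DefectDensity(12, 8000):F2} per KLOC");
        Console.WriteLine($"Leakage:  {DefectLeakage(30, 5):P0}");
        Console.WriteLine($"Coverage: {AutomationCoverage(140, 200):P0}");
    }
}
```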
Let me describe how we can implement the feedback loop via the retrospective. Before starting a retrospective, we dedicate a segment to testing: we add a specific section for testing feedback, QA observations, and the like. There we can raise what we faced during the last sprint and any bugs found too late.
Sometimes we only find bugs at the very end of the sprint because we are hurrying to release. We can ask whether our automation covered the critical areas: was it enough, did it help us or hurt us? Did we have repeated manual effort? And we can always ask whether there were any environment issues, right?
Through that, we can simplify our processes. And when it comes to feedback, it is the whole team, not only QA, who shares: a developer can say "QA did not do this" or "QA does not support that", right? We collect that feedback and set actions for the next sprint. For example, if there were environment stability issues, we plan for a stable QA environment next sprint.
If regression is slow, or there are missed cases or flaky tests, we think about how to do better test automation, how to do parallel testing, and how to record the test data. We can take actions there. And we can track the trend sprint over sprint.
So we can see how the trend is going with metrics like those and think about how they are improving. That is how we can keep track. Those are the things we can do, I think, to improve the feedback loop throughout agile.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Nilmini, that's super insightful and also gives a fresh perspective to quality engineering leaders who want to implement a similar feedback loop within their organizations or processes. And moving on to the next question, which is tied to the previous one: as organizations scale their agile practices, how do you go about maintaining testing standards without creating bottlenecks?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Yeah, this is a good question. In my experience, I have tried a lot of tools and methodologies, but most of them failed. So what I did was think about how to build common ground. A QA or testing center of excellence is an effective approach to maintaining testing standards while scaling agile across multiple teams, without becoming a bottleneck.
That is my thought, because, you know, since most of us are working remotely now, across different cultures, different time zones, and varied technical backgrounds, it becomes even more important to create a common ground where we can all align, collaborate, and grow together. Right?
That is exactly where a QA center of excellence plays a crucial role. It becomes the shared space, the neutral ground, where best practices live and knowledge flows, and everyone has access to the same tools, standards, and learning opportunities, no matter where in the world we are. If we can do that, that is the thing we have to aim for.
Let me say a bit about how a QA CoE can help maintain testing standards at scale. We should think about defining governance standards, but still stay in an agile mindset and follow agile principles. Through the CoE, we set lightweight best practices, not very heavy ones: the agile test strategy, automation guidelines, and the like.
We also have to enable the teams to engage with the CoE. We put out guardrails, not heavy rules, right? It is not about control; we are enabling them to participate, use the reusable assets, and give us their feedback, and if something isn't right, to go back and try different approaches. We can run training and workshops, and also a CoP, you know, a community of practice; that is common and widely practiced.
There we can share demos and do regular knowledge sharing: how to write test cases, how to do test automation, and if something was missed, how to recover and leverage it again. If teams want to use certain tools, right, we can discuss there which tool is best for automation for a particular product or project, and then validate that tool through the feedback loop we discussed previously. It all combines together. I think that is one major thing.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Nilmini, that really throws a lot of insight into how teams can go about maintaining testing standards. A lot of times, when team members are working remotely, there is an increased chance of those challenges coming to light, for instance; but having a centralized approach to knowledge sharing, best practices, and so on would definitely be useful for quality engineering teams.
And now we are on to the last question, which is: how has your approach to test automation evolved over the years, and which tools would you recommend for an agile environment?
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Yes, that is a topic close to my heart: test automation in an agile environment. When we talk about automation, we should understand that automation in agile isn't about automating everything; it's about automating the right thing at the right level to get faster, more reliable feedback. And automation is a team asset, not only QA's, right?
At the end of the day, we are delivering a quality product on time and cost-effectively; that is the team's asset. When it comes to automation, I have my own pillars: automate early, automate smartly. And I highly recommend the test pyramid, as we discussed previously; it is the way to think about unit tests, integration tests, and where to automate. It also means fail fast and fix fast, as we know.
Stability and maintainability matter too: the suite should be very clean and lightweight. And when you are doing fast-paced delivery, it must run in a CI/CD pipeline, with monitoring, control, and maintenance as a regular activity, right? That is my own approach to automation. And the other part you asked about is tool selection, right?
Tool selection should respect the product's technology while enabling the agile practices of fast feedback, collaboration, and continuous improvement. If the product is built on C#, for example, try to stay in the .NET ecosystem for automation wherever possible. The reason I say this is skill reusability, maintainability, and fast adoption within an agile environment.
To give a sample: when we define the tool stack for a C# background, for unit tests we would use xUnit or NUnit, though we recommend xUnit because it is the more modern framework. For API tests, we should go with RestSharp, or RestSharp together with SpecFlow, because that makes for powerful API validation. And for UI tests, nowadays most people are talking about Playwright because it's modern, fast, and reliable.
We can also use the BDD style with agile; if we have non-technical people, we often have to go with BDD. Reporting and the CI/CD pipeline mostly go through Azure DevOps and so on. If the product is Java, likewise, we can use JUnit, with Playwright or Selenium for the UI tests and REST Assured for the API tests, something like that. That is a sample stack.
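(To make that sample stack concrete, a minimal xUnit sketch against the tools named above: RestSharp for an API-level check and Playwright for a UI-level check. The URLs, endpoint, and expected page title are hypothetical placeholders, and Playwright's browsers must be installed beforehand.)

```csharp
using System.Threading.Tasks;
using Microsoft.Playwright;
using RestSharp;
using Xunit;

public class SampleStackTests
{
    // API layer of the pyramid: RestSharp against a hypothetical endpoint.
    [Fact]
    public async Task Api_ReturnsSuccess()
    {
        var client = new RestClient("https://staging.example.com");
        RestResponse response = await client.ExecuteAsync(
            new RestRequest("/api/health", Method.Get));

        Assert.True(response.IsSuccessful);
    }

    // UI layer: Playwright drives a headless Chromium browser.
    [Fact]
    public async Task Ui_HomePageHasExpectedTitle()
    {
        using var playwright = await Playwright.CreateAsync();
        await using var browser = await playwright.Chromium.LaunchAsync();
        var page = await browser.NewPageAsync();

        await page.GotoAsync("https://staging.example.com");

        // "Example App" is a placeholder title for this sketch.
        Assert.Equal("Example App", await page.TitleAsync());
    }
}
```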
And as I said before, the technology stack always comes first; it is non-negotiable. If the product uses C#, we have to go with that; there is no negotiating it. My other recommendation for people in the industry doing automation is to plan it carefully.
Otherwise, it is useless. We do automation to smooth the development process, to reduce manual effort, and to get fast, solid releases; if you do not plan it carefully, your time, effort, cost, and everything will be gone.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Nilmini. That definitely sheds a lot of light on how teams should approach test automation as a whole, because when it comes to quality engineering, automation is driving the digital transformation, and test automation is an important cog in that wheel. So, really helpful insights. Thank you so much.
It has been an amazing and insightful session. Nilmini, thank you so much for sharing insights on how to integrate testing seamlessly into agile workflows. Of course, it's a pleasure to learn about the best practices, strategies and solutions that you've sort of personally implemented across different teams to deliver faster and smarter software. I'm sure that our attendees would have also really gained a lot of perspective from your experience and expertise.
So, thank you once again. And to all our attendees, thank you for joining us today. I hope today's session has been super insightful for you in terms of understanding the actionable takeaways that you can implement to optimize your agile workflows and elevate software quality.
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Yeah, thank you.
Kavya (Director of Product Marketing, LambdaTest) - So stay connected for more episodes of the XP Podcast Series, where we bring you insights from thought leaders and industry experts on the future of testing. Thank you and have a great day, everyone.
Nilmini Ariyawansa (Quality Engineering Architect, KYG Trade) - Okay, thank you. Thank you again. Bye.
Kavya (Director of Product Marketing, LambdaTest) - Thanks, Nilmini.
Guest
Nilmini Ariyawansa
Quality Engineering Architect
Nilmini is a seasoned Quality Engineering Architect with a strong background in software testing, automation, DevOps, and process excellence. Previously serving as Head of QA and QA Manager, she has led high-performing teams to implement scalable quality strategies, streamline testing processes, and drive continuous improvement. With expertise in test automation, CI/CD, and quality governance, she is passionate about ensuring robust software delivery.
Host
Kavya
Director of Product Marketing, LambdaTest
With over 8 years of marketing experience, Kavya is the Director of Product Marketing at LambdaTest. In her role, she leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Kavya played a key role at Internshala, a startup in Edtech and HRtech, where she managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Kavya excels in creating and executing marketing strategies that foster growth, engagement, and awareness.