XP Series Webinar

Flaky Tests from an Engineering Perspective

In this XP Webinar, you'll learn how to mitigate test unpredictability, optimize development workflows, and enhance overall product quality for smoother releases and better user experiences.

Watch Now

Listen On

Apple Podcasts · Spotify · Amazon Music
Boštjan Cigan

Developer Relations, Semaphore

A senior engineer turned DevRel, with a history of working with startups and enterprises (Povio, Sportradar, Automattic, Qloo, Presearch), solving engineering challenges, and leading and growing teams, plus a passion for community building and knowledge sharing (Thinkful, Smart Ninja).

Harshit Paul

Director of Product Marketing, LambdaTest

He plays a pivotal role in shaping and communicating the value proposition of LambdaTest's innovative testing solutions. His leadership in marketing ensures that LambdaTest remains at the forefront of the ever-evolving landscape of software testing.

The full transcript

Harshit Paul (Director of Product Marketing, LambdaTest) - Hello, everyone! Welcome to another episode of the LambdaTest XP Series. Through the XP Series, we dive into the world of insights and innovation, featuring renowned industry experts and business leaders in the testing and QA ecosystem.

I'm Harshit Paul, Director of Product Marketing at LambdaTest, and I'll be your host for this session on mitigating flaky tests in the CI/CD suite. Before we begin, let me introduce you to our guest, Boštjan Cigan, Developer Relations at Semaphore.

Boštjan is a seasoned engineer who transitioned into DevRel, boasting a rich background in tackling engineering challenges across startups and enterprises. With a passion for community building and knowledge sharing, Boštjan brings a wealth of experience to the QA community and does so by coaching students as well.

Hey, Boštjan, thank you so much for joining us. So glad to have you here.

Boštjan Cigan (Developer Relations, Semaphore) - It's great to be here.

Harshit Paul (Director of Product Marketing, LambdaTest) - Perfect. So in today's webinar, we'll dive into the world of flaky tests from an engineering perspective. Boštjan will help us understand the root causes of flaky tests, various detection techniques, and effective mitigation and management strategies to help keep flaky tests at bay, especially why it's super important to keep an eye on your CI/CD pipeline once you have everything else set up.

So we'll start this talk real quick, right? And one question I would like to open with: where do you find these flaky tests, right? Where do they come from? And what practices would you recommend to keep them at bay?

Boštjan Cigan (Developer Relations, Semaphore) - So, of course, the answer on a low level is really simple, right? Flaky tests come from your code, and it's not necessarily your testing code. It can be code that's outside of your tests, code that the tests you are running depend upon. And yeah, mostly, it comes from two factors, right?

One is the people factor. Let's say I was one of the engineers who ignored the flaky test concept because, at some point in my engineering career, I didn't know what flaky tests were. And the other factor is basically tech-based, right?

So there are parts of your code that don't necessarily run the way you want them to. These parts are usually tests that are dependent upon each other. So maybe one of those tests has failed and subsequently, your other tests are failing as well because of that test.

But it didn't necessarily fail because of something you did wrong. So one of the things is for instance, like order dependency between your tests, right? So if something fails because of, I don't know, an infrastructure problem or something like that, then your other tests will fail as well.
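To make the order-dependency point concrete, here is a minimal, hypothetical pytest sketch (not from the talk; all names are made up) showing how shared state makes one test depend on another, and how a fixture restores isolation:

```python
import pytest

cart = []  # module-level state shared by every test in this file

def test_add_item_shared_state():
    cart.append("book")
    assert len(cart) == 1

def test_cart_starts_empty_shared_state():
    # Only passes if it runs before test_add_item_shared_state,
    # so reordering or parallel execution makes it flaky.
    assert len(cart) == 0

# Isolated version: each test gets a fresh cart, so order no longer matters.
@pytest.fixture
def fresh_cart():
    return []

def test_add_item(fresh_cart):
    fresh_cart.append("book")
    assert len(fresh_cart) == 1

def test_cart_starts_empty(fresh_cart):
    assert len(fresh_cart) == 0
```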

So this is basically the first root cause you can find. For that, you can use mocks and stubs. So a real-life example, for instance, would be if you're communicating with Stripe, and okay, Stripe will almost always be accessible, right?

But maybe if it's some obscure service that you're using in your tests, maybe they have timeouts, maybe they don't work at the moment. And this can basically affect your test suite, right? And because of that, your test will fail.

But it won't fail because of your business logic. It will fail because of some other dependency that's currently not working. And for that, what we do is we provide mocks and stubs, right?

And we use those instead. But if something you depend on isn't working a lot of the time, maybe you should also think about replacing that part of your code, right? And basically, that's one of the things, right? One of the other things is tests that are reliant upon timeouts or retries.
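As a rough illustration of the mock-and-stub approach just described, here is a hypothetical pytest sketch; the payment client and function names are invented for illustration, not a real Stripe API:

```python
# Stubbing out an external payment service so the test exercises only our
# business logic, not the third party's availability.
from unittest.mock import Mock

def charge_customer(client, customer_id, amount_cents):
    """Hypothetical business logic under test: charge and report success."""
    response = client.create_charge(customer=customer_id, amount=amount_cents)
    return response["status"] == "succeeded"

def test_charge_customer_succeeds_without_real_network_call():
    fake_client = Mock()
    fake_client.create_charge.return_value = {"status": "succeeded"}

    assert charge_customer(fake_client, "cust_123", 500) is True
    # The stub also lets us assert how the external API was called.
    fake_client.create_charge.assert_called_once_with(customer="cust_123", amount=500)
```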

So maybe there's a part of your code that communicates with a certain API. Or, for instance, you have webhooks or something similar. And you want to check if that webhook was delivered with some sort of test. And, of course, we have some sort of retry functionality over there.

And we need to have, let's say, a fixed number of retries or something like that. And if those retries fail, then your test did fail, but maybe there's another cause why the test failed, right?
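A minimal sketch of what such a bounded-retry check might look like; the names are hypothetical, and the point is that exhausting the retry budget may signal a slow dependency rather than broken business logic:

```python
import time

def wait_for(condition, retries=5, delay_seconds=2.0):
    """Call `condition` up to `retries` times, pausing between attempts.

    Returns True as soon as the condition holds, False once the retry budget
    is spent. A False here may mean a slow dependency rather than a bug,
    which is exactly the ambiguity described above.
    """
    for attempt in range(retries):
        if condition():
            return True
        if attempt < retries - 1:
            time.sleep(delay_seconds)
    return False

def test_webhook_is_eventually_recorded():
    # Simulates a delivery that only shows up on the third poll; in a real
    # test this closure would query your own delivery log or inbox.
    polls = {"count": 0}

    def delivered():
        polls["count"] += 1
        return polls["count"] >= 3

    assert wait_for(delivered, retries=5, delay_seconds=0.0)
```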

So that's one thing, and another thing is your environment, so I can give you a real-life example. When I was working on a blockchain project, for instance, my tests would work fine locally when I was using a local...

Harshit Paul (Director of Product Marketing, LambdaTest) - Alright!

Boštjan Cigan (Developer Relations, Semaphore) - I don't even remember what it's called, but something that mocked the blockchain, right? And it worked perfectly fine. And when I ran it in the test environment, my tests were failing randomly.

So sometimes it would go through, sometimes it wouldn't. And the solution at that point in time, because obviously I didn't have the time to fix it, was just to re-run them. And they magically went through, right? And this is one of those things that can affect your test suites as well.

So inconsistency between your environments. You have to make sure that your local, staging, and testing environments, whatever you call them, are consistent. And this is also a root cause of flakiness, right?

And yeah, of course, I talked about dependencies there, like you need to isolate your tests. You don't want a lot of dependencies between them because that can cause havoc. I'm sure you can also tell me about war stories regarding that as well.

Harshit Paul (Director of Product Marketing, LambdaTest) - I'm pretty sure the audience will be relating this to their own dark episodes around flakiness. Pretty much everybody encounters it at some point or another. It has been a head-banger, honestly speaking.

But yeah, I am pretty sure that happens to everybody. You know, at some point, if you are in QA, you're bound to face them.

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, yeah, of course. But yeah, basically, just to put it simply, your code base needs to ensure, to put it in a ChatGPT way, deterministic behavior. And that basically means your code needs to be concise. It needs to follow the same scenario every time you run your testing suites.

And, of course, just to summarize it: always use mocks and stubs, and make sure you mock any external dependency that you have, because that could lead to flakiness if that external service doesn't work. And that can also be an internal service as well, right?

So if you're in a microservice-based architecture, maybe some of the microservices currently aren't working, or someone is fixing something on them, right? That can also lead to your test failing, and then yeah.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right! Speaking of failing, how do you differentiate between genuinely failing tests versus flaky tests using your detection techniques? What are some key metrics or indicators that you might be using for this purpose?

Boštjan Cigan (Developer Relations, Semaphore) - Yeah. You know, this area is a bit of a black box, right? So there are a lot of techniques that you can use. The basic one is the re-run strategy. That's like the dumbed-down way of checking if any of your tests are flaky, right?

So, for instance, just like I was talking about it before, you run your tests locally, they work, you run them in your testing environment, they fail, and you're like, okay, let's try re-running them and then they magically work and you ignore the error and go on.

So one detection technique is basically the re-run strategy. You re-run them and see if they pass or not. So for a test on the same code base, if it goes through once, then doesn't the second time, then goes through again, then doesn't, doesn't, doesn't, that's something you need to keep an eye on.
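A rough sketch of that brute-force re-run strategy, assuming a pytest suite and an invented test id; it simply runs the same test repeatedly on unchanged code and flags disagreement between outcomes:

```python
import subprocess
from collections import Counter

def rerun_test(test_id: str, runs: int = 10) -> Counter:
    """Run a single test id `runs` times and tally pass/fail outcomes."""
    outcomes = Counter()
    for _ in range(runs):
        result = subprocess.run(["pytest", test_id, "-q"], capture_output=True, text=True)
        outcomes["passed" if result.returncode == 0 else "failed"] += 1
    return outcomes

if __name__ == "__main__":
    # The test id is invented; point this at a suspect test in your own suite.
    counts = rerun_test("tests/test_checkout.py::test_payment_flow", runs=10)
    if len(counts) > 1:
        print(f"Mixed outcomes on unchanged code (likely flaky): {dict(counts)}")
    else:
        print(f"Consistent outcomes: {dict(counts)}")
```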

So that's one of the things, that's like the pure behavior of flakiness, right? And the other thing is historically checking what was going on with that same test through other commits, right? So if that test hasn't changed, maybe on a different branch it had different behavior.

So here we are again in that dependency analysis mode, right? So if your tests are dependent, then this can happen. But yeah, one aspect of detecting them is also like using machine learning techniques. I'm going to be quite frank.

I don't know a lot about machine learning techniques for flaky test detection, but of course, everything is reliant upon the data that you have. So, for instance, I'm sure LambdaTest collects a lot of data about tests, right? Throughout running them all the time. And that's the data you can use to then like detect if a certain test is flaky or not.

Harshit Paul (Director of Product Marketing, LambdaTest) - As a matter of fact, at LambdaTest we do have an offering called Test Intelligence, which helps you deal with these flaky tests. You can actually see the trends at which your tests are failing, and your tests are even categorized based on error classification.

So you're able to deep dive and see that historical trend: okay, this passed one time in a week, failed three times in the same week, and the next week it's floating at, you know, a pass rate of four. And then those false positives and negatives you can measure using Test Intelligence at LambdaTest. So yes, that happens.

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, exactly. It's all about the data around your test suites. And speaking about this, there's also time-based analysis, which I haven't talked about yet. So, for instance, one test can run for, I don't know, let's say 30 seconds. I'm making it up. But in another instance, it could run for 10 minutes and not go through. This is also something that could be flaky.
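A small, hypothetical sketch of such time-based analysis; the duration history here is invented, and in practice it would come from your CI system's test reports:

```python
from statistics import mean, stdev

def suspicious_durations(history: dict[str, list[float]], threshold: float = 3.0) -> list[str]:
    """Return test names whose latest duration is more than `threshold`
    standard deviations away from that test's historical mean."""
    flagged = []
    for test_name, durations in history.items():
        if len(durations) < 5:
            continue  # not enough history to judge
        *past, latest = durations
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(latest - mu) > threshold * sigma:
            flagged.append(test_name)
    return flagged

# Made-up data: one test that normally takes ~30s suddenly takes 10+ minutes.
history = {
    "test_checkout_total": [28.0, 31.5, 29.8, 30.2, 30.9, 612.0],
    "test_login_redirect": [1.1, 1.0, 1.2, 1.1, 1.0, 1.1],
}
print(suspicious_durations(history))  # ['test_checkout_total']
```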

Harshit Paul (Director of Product Marketing, LambdaTest) - Yes, Right!

Boštjan Cigan (Developer Relations, Semaphore) - But again, this is that black box area I was talking about, right? Maybe some part of your infrastructure is currently working very, very slowly, and the test that's dependent upon it just needed more time to run in that case, right?

So that's why time-based analysis basically needs to connect to everything we've talked about before, right? It's important to have all of the data at hand to detect if a test is flaky or not.

And maybe your analysis will fail. That's why you need to look through your history to see if a test is flaky or not. So in a scenario where you've only run the suite once or twice, you can't necessarily say, hey, this is a flaky test, right?

Because maybe there are other factors involved. That's why this whole area is basically a black box. And that's why machine learning is a good tool to use to detect if a test is flaky or not, right?

Harshit Paul (Director of Product Marketing, LambdaTest) - And I believe that is why you pressed on the logic of re-running the test, right? Because the more data you have, the more swiftly you're able to pinpoint why it's happening, because it depends upon how much volume you're seeing in terms of failure or pass rate, right?

So of course that helps. You also touched briefly on test environment stability, right? And infrastructure. So I would just like you to expand on that area: the role of environment stability and infrastructure reliability in reducing the occurrence of flaky tests.

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, so for this area, I can present my expertise around how this affects your CI/CD process. But basically, if it affects your CI/CD process, then it affects your environments, right? Because you always want to deliver the latest code, or want to have the latest QA environment for your testers to test on, or maybe for automated tests to be run on.

So if you have maybe just one flaky test in your test suite, who cares? It's going to fail once. Maybe you'll re-run that pipeline and it will go through. But for larger apps, you can have thousands of tests.

And if a few of them are flaky and present this sort of behavior, sort of working, not working, and you start re-running your workflows, this will basically take your time. Your build time is taken away from you because you will be re-running your tests all the time.

Harshit Paul (Director of Product Marketing, LambdaTest) - Hahaha!!!

Boštjan Cigan (Developer Relations, Semaphore) - And if you're taking away time, you're taking away money. It depends on what you're running your pipelines and workflows on, right? And it's not just time and money. It's also the engineers that are working on these projects.

So if they can't get those instant feedback loops, right? Some sort of feedback on whether their code is performing well or not, then yeah, that leads to massive frustration, right?

And that's why you basically need to eliminate flaky tests from your test suite as soon as possible, because this will lead you into a process where you spend less time and less money on all of the workflows that you're doing, right? Because big companies don't just deploy once per day. Each commit that goes in, you know, the pipeline runs; it takes time.

And if, I don't know, we are talking about a company that has a thousand employees, 500 of them are engineers, and they each do at least one or two commits per day, that's something like 500 workflows being triggered, right?

And if 250 of them just say, oh, we're not going to pass, and then they get re-run, well, that again leads to losing time and money.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right! And I believe like a faster feedback loop, as you just said, right? It's super critical. And I believe everybody's aware that, you know, the later you find a bug, the more it's gonna cost you. It's obvious, right?

And speaking of which, you also touched upon money and time, right? And the life of a modern QA, let's be honest, is never easy, right? With all the existing things to keep in check and the new requirements coming in on a weekly, biweekly basis, right? There's a lot to be done.

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, yeah, exactly.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right! So how do we prioritize and address flaky tests on top of that, right? While being time-sensitive?

Boštjan Cigan (Developer Relations, Semaphore) - Well, first, of course, you need to know which of the tests are actually flaky. And then, I'm not gonna say maybe the stupidest technique, but let's say the easiest technique to use: if you're in a rush and you really need to deploy that code or test it out, one of the approaches is to actually isolate those tests.

So the ones that are flaky, just isolate them. But you also need to be cautious about that, right? So if it's some important part of your business logic, right? Like, I don't know, I'm making stuff up as I go along, but let's say that you're checking if some sort of money transaction is going through or not.

Obviously, that's important. You can't just comment it out and isolate it, right? But if it's like a stupid check, okay. And when I say stupid, I'm saying it in quotation marks, right?

If the color on the interface is yellow when I press the yellow button, OK, sure, we can isolate that one and find out later why it's causing us flakiness. And yeah, isolation is basically the first thing you can actually do. And of course, it really depends on whether you're in a rush or not. So isolation is good.
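A minimal, hypothetical sketch of that kind of quarantining with pytest: the low-risk UI check is skipped with a tracking reference while a payment-critical check stays in the suite (the ticket id and helpers are made up):

```python
import pytest

@pytest.mark.skip(reason="Flaky UI color check; quarantined, tracked in FLAKY-123 (made-up ticket)")
def test_confirm_button_is_yellow():
    ...

def fetch_charges_for_order(order_id: str) -> list[dict]:
    # Stand-in for the real lookup so this sketch runs on its own.
    return [{"order": order_id, "amount_cents": 500}]

def test_order_is_charged_exactly_once():
    # A payment-critical check like this stays in the suite;
    # quarantining it would hide real business risk.
    charges = fetch_charges_for_order("order_42")
    assert len(charges) == 1
```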

Harshit Paul (Director of Product Marketing, LambdaTest) - Hm-Mmm! Right.

Boštjan Cigan (Developer Relations, Semaphore) - But usually what you need to do, if a lot of your tests are flaky, is find out why they're flaky. And then, usually, dependency scanning comes to mind, right? And code analysis, to see if tests are dependent upon each other and stuff like that. And, of course, you can use external tooling for that, right?

So SonarQube is quite good at scanning your code, giving you suggestions, and also analyzing tests. And of course, each big company that has larger test suites uses something, either you guys, right? Or, I don't know, the other solutions that are out there, like Cypress and stuff like that.

You need to look at those tools and their suggestions and basically eliminate the flakiness from your tests. So usually, the root cause will be the code itself. So you need to make a plan and strategize around that.

So if the flaky test covers some really important business logic, like the money transactions I was talking about, of course, you need to give it higher priority before any features are released and eliminate it as soon as possible.

So yeah, it's all about strategizing. I mean, there's no common approach or magical tool that can do that for you, but you need to do this from the start. Basically, you need to be aware of what flaky tests are and what they do to your team and start from the beginning.

So for each new project you do, be aware that this exists and how to eliminate it from the start. But if you're already deep into something, at least start eliminating the tests that are causing the biggest time disruptions.

Harshit Paul (Director of Product Marketing, LambdaTest) - Mm-hmm. Got it.

Boštjan Cigan (Developer Relations, Semaphore) - So, for instance, if some test makes the test suite take much longer to go through, start eliminating that one before the other ones. And for those that are time-sensitive and have the highest criticality, you should solve those first. And yeah, of course, it depends on how your workflow is, right? But usually, you start with the ones that are the most critical, right?

Harshit Paul (Director of Product Marketing, LambdaTest) - I mean, they are flaky tests, and you can't expect them to be consistent. So the handling method is also going to vary from business case to business case. But yes, of course, prioritization, as you said, isn't something you can do generically. It has to be done based on your business scenario. What exactly is failing is something you need to define a priority for at the end of the day.

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, of course.

Harshit Paul (Director of Product Marketing, LambdaTest) - And speaking of the implications of flaky tests, what do you think are some of their implications for software maintenance and technical debt?

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, so of course, for software maintenance, it basically relates to the answers we were giving before this question, right? It's basically the same. The more flaky tests that you have in your test suites, the harder it's going to be to maintain them, I mean, maintain your code, because this basically means that, as we've already talked about, your feedback loops are slower.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. Mm-hmm.

Boštjan Cigan (Developer Relations, Semaphore) - Any process that you do is slower. Any new code that you deploy is slower. And this is the biggest bottleneck that you have. It's a vicious cycle that just doesn't stop. And the longer you keep ignoring it, the more it's going to grow, exponentially. Because let's say that you have a test that is flaky, and your test suite runs for 40 minutes.

But that's like a really large one. And maybe one of those tests is flaky in there and your test would fail, and you re-run it, and then it magically goes through. You've just spent 80 minutes deploying something, right? And this is something you really want to eliminate, not to be a part of.

That's why we wanna remove this as soon as possible because it really has a high impact on how you do your work. And at Semaphore, basically, when we were doing an analysis of this, right? So because we are focused on CI/CD, we provide a CI/CD platform for you. We do have a lot of metrics in the background.

So how fast are your pipelines? What are the average times? What are the bottlenecks, stuff like that, right? And we were checking it out. So for instance, test suites that go through, usually you start with optimizing those, right? And you try to keep the times of your tests minimal.

So you want to optimize them. But in the end, we figured out that those optimizations might lower your times. But flaky tests are the biggest bottleneck in the end. Even if you fix some sort of test that's like really slow or does some calculations, and you optimize the algorithm behind it, it's faster.

Well, that's nothing compared to having flaky tests in your test suites because this is like a massive bottleneck. It stops your pipeline. You need to re-run it. You need to re-run your test suites. It has a way bigger impact than any of the other tests that you want to optimize or solve.

So that's why this is a pretty important thing. And that's why we decided to tackle it, at least on a CI/CD level, to give you an overview.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right, so considering how alarming these flaky tests are, the biggest question is: how do you find them?

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, so finding them, we've already talked about this: using certain tools that help you find those flaky tests and using the data that helps you find them. But yeah, I can showcase this part, since you've posed the question of how our dashboard helps you find these tests.

And let's go into, hopefully, a successful screen share mode. So, okay, this is like our flaky test dashboard, right? And, of course, here at the top part, we have certain filters that allow you to filter out the data that you have. And I won't be doing a deep dive into this.

So the important things that you need to know are the disruptions and the number of disruptions that the certain test causes. And, of course, you can order them here, right in our interface. And the disruption history as well.

So an important metric is how many broken builds you've had. And that's, you know, one of the root causes we were talking about. This is something that really wastes a lot of your time, right? And here in this dashboard, you can see how flaky tests affect your builds.

So this is like a, let's say, fake scenario, right? And you can see that the more flaky tests that you have, the more broken builds you have. And these broken builds really have an impact on how much time you're spending on your CI/CD builds, right? And yeah, this is just one tool that you can use to detect flaky tests.

So we're focused on giving you like an overview of which of your tests are failing, how it affects your CI/CD pipeline, your build times, your pipeline performance, your workflow performance, stuff like that.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right.

Boštjan Cigan (Developer Relations, Semaphore) - And here in our interface, you can basically just take actions. So create a ticket for each test, which links to either a result that you have in some external tooling, like LambdaTest, for instance, or maybe a Jira ticket or something. Then you can mark it as solved once you've solved it. But the important metrics are also inside a specific test.

So you can do a deep dive, and you can see in the last 30 days, for instance, for this certain test, you can see how many disruptions it has caused, what's the impact on your test suite, and what's the pass rate of your pipeline because of this flaky test. And you can also see the duration as well, but this is the important data.
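As a rough sketch of the kind of aggregation behind metrics like these, here is a small, hypothetical example that computes per-test disruption counts and pass rates from invented CI result records (this is not Semaphore's implementation):

```python
from collections import defaultdict

# Made-up per-run test results; real ones would come from your CI test reports.
runs = [
    {"test": "test_checkout_total", "passed": False},
    {"test": "test_checkout_total", "passed": True},
    {"test": "test_checkout_total", "passed": False},
    {"test": "test_login_redirect", "passed": True},
    {"test": "test_login_redirect", "passed": True},
]

stats = defaultdict(lambda: {"runs": 0, "disruptions": 0})
for record in runs:
    entry = stats[record["test"]]
    entry["runs"] += 1
    if not record["passed"]:
        entry["disruptions"] += 1  # a failure that broke (or forced a re-run of) the build

# Most disruptive tests first, with their pass rate over the window.
for test, entry in sorted(stats.items(), key=lambda kv: kv[1]["disruptions"], reverse=True):
    pass_rate = 100 * (entry["runs"] - entry["disruptions"]) / entry["runs"]
    print(f"{test}: {entry['disruptions']} disruptions, {pass_rate:.0f}% pass rate")
```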

And when we were talking about how to plan on solving them, this is just one of the things that can help you out. So our dashboard doesn't give you insights into, hey, this test was flaky because it has an external dependency or stuff like that. But it gives you an overview of which tests impact your CI/CD process the most.

So the tests that obviously have a lot of flakiness in them will affect your pipeline performance and also your CI/CD process. And this is what our flaky test dashboard focuses on: what kind of effect it has on your pipeline performance and, consequently, on your workflows as well, right?

Because at the end of the day, your software delivery is dependent upon you having a successful pipeline that basically delivers this software to your customers, right? And yeah, as said, this gives you an overview. And if you want, of course, you also need to use external tools that give you maybe more insight into each test, and why it performed poorly.

Harshit Paul (Director of Product Marketing, LambdaTest) - Alright.

Boštjan Cigan (Developer Relations, Semaphore) - Does it have any static dependencies or stuff like that?

Harshit Paul (Director of Product Marketing, LambdaTest) - And then, of course, in the age of AI, we have been incorporating a lot of features with the help of AI to help mitigate flaky tests for folks who are running their automation suites across the LambdaTest platform.

One such feature is, of course, AI-based RCA, which gives you an automatically generated root cause analysis. And that has been a major productivity booster because, a lot of times, there are these humongous test suites.

And a simple syntactical error could also take up a lot of time for somebody to dig down as to where it happened and why it did. Right?

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, of course. As I've told you, our focus was, you know, for instance, when you have a lot of external tooling for analyzing your code or reviewing your code or whatever, usually if you have big engineering teams, sometimes you just let things slide if you're in a rush, right?

And in this case, our focus was just to give you an overview of your performance, right? Of your pipeline performance and not kind of develop a feature that fixes the code on its own, right? Because we're focused on this part. That's why we have you guys.

So, because usually when people trigger their workflows, they check their pipeline performance, whether they went through and stuff like that, right? And if you have visibility into what's happening, so if you can just click on flaky tests and see this overview, you can see.

Harshit Paul (Director of Product Marketing, LambdaTest) - Mm-hmm. Right.

Boštjan Cigan (Developer Relations, Semaphore) - Oh, okay. Now guys, we need to start solving this because I can see that we're spending too much money, too much time, and too many resources just because some of these tests are behaving randomly, you know? And yeah, in the age of AI, of course, you can use a lot of code scanners and stuff like that that give you suggestions on why something might be behaving as badly as it is.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. And, of course, those visual representations are extremely critical, right? As you also showcased in the dashboard, you see that, over time, something you've been ignoring in the past is actually adding up and leading to a lot of extra waste in the future, right? Considering the other tests failing at the same time, right? So, of course, that helps.

Boštjan Cigan (Developer Relations, Semaphore) - Yeah, people don't like to read, especially developers, because, you know, each day, and I'm speaking from a personal perspective, right? You always need to learn something new. You always need to read some documentation.

And if you run something, a workflow, and it gives you a huge amount of text and insight, and you see something red that says this test has failed, you usually tend to ignore it because it's a lot of info. But yeah, as you've said, it's different if you have a visual representation.

At Semaphore, we really like doing this because we know that it's not just developers and engineers; you also have decision makers who want to have insight into what is happening all the time. We like doing these visual representations, right?

Harshit Paul (Director of Product Marketing, LambdaTest) - Right.

Boštjan Cigan (Developer Relations, Semaphore) - Because from these, you can actually, in 10 seconds maybe, see what's going on, get instant feedback, and then you're going to say, let's do a feature freeze now, because this isn't going anywhere. We're wasting time. So let's stop doing features. Let's focus on fixing flaky tests as soon as possible.

Harshit Paul (Director of Product Marketing, LambdaTest) - That makes absolute sense. 100% visibility to all stakeholders at all times. That is the major perk. So we talked a lot about handling flaky tests and managing them. But as we wrap this up, I would just like to ask for your top advice for teams to mitigate or handle flaky tests efficiently.

Boštjan Cigan (Developer Relations, Semaphore) - To be honest, if you re-watch this recording, that's basically it, right? I mean, we could go through it in a summarized sense, but in the end, I think it's important that teams are aware that flaky tests exist.

Because usually, okay, but this is only based on my experience, right? Working with several teams. If you're doing a certain project for a startup, right? A prototype or something.

You don't think about flaky tests and stuff like that, right? You think about doing the tests, and then they run. I think it's just important to be aware as an engineering team that flaky tests do exist and what they actually are.

Because some engineers don't know what flaky tests are—they don't even know that they exist. And this is like the first step, right? Before even doing a deep dive into all the technicalities and how-tos and what-not, right?

Harshit Paul (Director of Product Marketing, LambdaTest) - Mm-hmm. Right.

Boštjan Cigan (Developer Relations, Semaphore) - The first step is to let your engineers be aware that this is a problem. It does exist and it can really kill your performance long-term, right? But yeah, that's basically it.

So this is the human factor I forgot to mention throughout this talk, right? It's quite important that people are aware that this is a problem and that it can have a lasting impact on your software delivery process and your testing process as well, right?

But yeah, all of the other things that we've mentioned, make sure that your tests aren't dependent upon each other. Make sure you use mocks and stubs for external services.

And yeah, basically any article oriented towards code for flaky tests is going to give you answers on what kind of tests you should write and what kind of code you shouldn't write so that your tests won't be flaky in the end.

I mean, you're not going to avoid them 100%, but it's important that you minimize them from the start. Otherwise, later on, depending on the number of your flaky tests, you're going to devote months of engineering work to them, especially if you have a lot of dependencies, right?

Harshit Paul (Director of Product Marketing, LambdaTest) - Right, right. That makes sense. And this is from a personal curiosity. Do you think joint reviews would be of help while you're doing a code review where QA and devs are both looping in at the same time and doing a joint review? What is your take on that from a flakiness mitigation perspective?

Boštjan Cigan (Developer Relations, Semaphore) - Yeah. From this perspective, it would make total sense, right? For instance, I'm speaking from my experience, right? Usually, developers see QA as this evil team that tries to destroy their code, and there's a bunch of memes around it on this planet, right?

But I think, yeah, QA gives you another perspective. So as an engineer, you're focused on delivering that feature or solving a bug or whatever, right? But it's the QA person, or any other role that's oriented around this, that gives you that insight.

So sometimes, you don't think about something that the QA would think about. And it's like this perspective that can like give you a boost. So yeah, of course, code reviews with QAs would definitely like help in this case.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right.

Boštjan Cigan (Developer Relations, Semaphore) - Especially if you're starting with an engineering team that's maybe, I don't know, a lot of juniors and maybe two seniors or something like that. It would be really beneficial to involve a senior QA in the code review process as well.

Harshit Paul (Director of Product Marketing, LambdaTest) - And that would be all the questions from my end. Thanks a lot, Boštjan, for joining us today and talking about flaky tests, which are so vague in nature, yet we had such a crisp, to-the-point session.

Thank you for taking time out of your busy schedule and joining us. And for all those who are listening, you can subscribe to LambdaTest's YouTube Channel for more insightful XP Series episodes like this one.

Boštjan Cigan (Developer Relations, Semaphore) - And, of course, if anyone wants to try out the dashboard that I've shown, feel free to sign up at Semaphore, test out our CI/CD, and also test out the flaky test dashboard that we have. And yeah, I'm available on social networks, and we'll have that in the description down below, right?

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, and we'll also be streaming this. And yeah, you covered that part for me exactly, right? So we'll be sharing Boštjan's social handles. Of course, you can find them in the description of the video. And by all means, feel free to connect with Boštjan and me over LinkedIn if you have any further questions about this particular talk.

See you in the next episode of our LambdaTest XP Series, where we talk about the latest trends, innovations, and conversations from the world of testing and quality assurance. Thanks, everyone, for tuning in. This is Harshit, with Boštjan, signing off. Until next time, happy testing.

Past Talks

Testing Tomorrow: Unravelling the AI in QA Beyond Automation

In this webinar, you'll discover the future of QA beyond automation as he explores the realm of AI in testing. Unravel the potential of AI to revolutionize QA practices beyond conventional automation.

Watch Now ...
Shifting Accessibility Testing Left with LambdaTest and Evinced

In this webinar, you'll learn to pioneer accessibility testing's evolution by shifting left with LambdaTest and Evinced. Uncover strategies to streamline workflows, embed inclusivity, and ensure a comprehensive approach for user-centric testing.

Watch Now ...
Building Products that Drive Better Results with Shortcut

In this XP Series Episode, you'll explore the keys to creating impactful products with Shortcut. Unlock strategies for enhanced results, streamlined development, and innovative approaches to building products that drive success.

Watch Now ...