XP Series Webinar

Faster Feedback with Intelligent CD Pipelines

In this webinar, you'll delve into the heartbeat of modern software delivery and learn how to optimize your CI/CD pipelines for faster and more efficient feedback loops.

Watch Now

Eric Minick

Director of Product Marketing for DevOps Solutions, Harness

Eric Minick is an internationally recognized expert in software delivery with over 20 years of experience in Continuous Delivery, DevOps, and Agile practices, working as a developer, marketer, and product manager. Eric is the author of “Application Release and Deployment for Dummies” and is cited in books like “Continuous Integration”, “Agile Conversations,” and “Team Topologies”.

Harshit Paul

Director of Product Marketing, LambdaTest

Harshit Paul serves as the Director of Product Marketing at LambdaTest, where he plays a pivotal role in shaping and communicating the value proposition of LambdaTest's innovative testing solutions. Harshit's leadership in product marketing ensures that LambdaTest remains at the forefront of the ever-evolving landscape of software testing, providing solutions that streamline and elevate the testing experience for the global tech community.

The full transcript

Harshit Paul (Director of Product Marketing, LambdaTest) - Hi everyone! Welcome to another episode of the LambdaTest Experience (XP) Series. Through XP Series, we dive into the world of insights and innovation, featuring renowned industry experts and business leaders in the testing and QA ecosystem.

In today's session, we'll delve into the heartbeat of modern software delivery and faster feedback with intelligent CD pipelines. I'm Harshit Paul, your host and Director of Product Marketing at LambdaTest, and joining me today is Eric Minick.

Eric brings with him a wealth of experience spanning two decades in continuous delivery, DevOps, and Agile practices. He is an internationally recognized expert in software delivery and has donned multiple hats as a developer, a marketer, and a product manager.

He's the author of the book, “Application Release and Deployment for Dummies” and has made notable contributions to industry-defining books such as “Continuous Integration”, “Agile Conversations”, and “Team Topologies”.

Currently, Eric contributes his expertise to the product marketing team at Harness, bringing innovative solutions to the market. Hey, Eric! Thank you so much for joining us for this episode. It's a pleasure to have you here.

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Hey Paul, very excited to be here. Love this topic. Ten or fifteen years ago, we had builds and tests, with feedback coming back in five or ten minutes. And a team I was working with, like two years ago, it took 20 minutes to do the build. And both times, it was Java apps.

So I feel like this is an area that's been important for a long time. And somehow, we've managed to get worse at it for a lot of teams. So at the same time, we know that, you know, we used to deploy to production every year, maybe every quarter.

And now we've got a lot of teams who deploy to production multiple times a day. So we know that there are folks doing it right. So happy to talk about this. This is a fun talk.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, and I mean, speed is something that everybody is chasing out there, and everybody is struggling with at the same time when it comes to CI/CD. So I'm pretty sure folks would be interested to know how we can bridge this gap, and your expertise would definitely help us get to the right parts of it. So, you know, speaking of rapid feedback, why is it so critical in the first place, Eric?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Yeah, you know, I think there's a couple ways of looking at it. One, if the feedback is all good, right? If everything's perfect and the tests all pass, we had a good day. Well, then we've got some innovation that we want to give to our customers as soon as possible.

We have something valuable to make money from; we want to delight our customers, whatever that is. And so getting that to our users more quickly is awesome stuff. On the other side, if there's something wrong, the sooner we let developers know about it, the better, because it's going to be fresher in their minds.

They're not going to be starting something new and then have to put that aside, work on the old thing, come back, and try to remember it again. The quicker we can get that feedback, the easier it's going to be to get it fixed and the less damage we're going to end up doing to everybody.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, that makes perfect sense and what do you think is the basic strategy for accelerating feedback?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Yeah. So I think automation shouldn't be a very controversial idea, but the more we can automate, the better, right? And so as soon as the code has changed, we can get it built, we can get it tested. We can notify people and remove as much friction as possible.

And for a lot of teams, I see that process being multiple pipelines. I've got a build pipeline, a deployment pipeline, a test pipeline, and another deployment pipeline. And each of those has a button click between them. And so if we can chain those together, that'll be a lot better. We wanna be precise in our testing.

So when we make a change, let's test that change. And the quicker we can align our tests to what changed and not what didn't, the better. And the way I like to think about this process thing for a team is to use an idea called “Null Release”.

And the idea with a null release is to imagine what if the smallest possible change was made to our application, right? We decided, you know, that login should be two words, not one word or something like that. We need to change the text on a button somewhere. Like, what does it take to get that to production?

For a lot of teams, it's like, well, I would need to make the code change. Maybe someone else would need to change the test that validates login, because they've got to look for a different link, a different button name. And we're going to go through that process. And our tests only run nightly.

So the soonest we'll learn whether that works is a day or two. And then we only release every other Tuesday after a change advisory board meeting, et cetera, et cetera. So the quickest I could change this text is in three weeks. OK, what would it take for that to be one week, a day, or an hour?

Because there's not a lot of risk in this change. So start thinking through how you smooth that out for a really small change and then consider, well, what needs to come back in as we think about bigger and scarier changes. I think it's a really good process for teams to go through.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right, that makes perfect sense, actually, but you talked about a really interesting point back there. You talked about chaining different pipelines, considering everybody has so many pipelines put together.

So I would just like you to talk a little more about how it can impact your feedback loop. When you chain pipelines, how does it accelerate your feedback exactly?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Yeah, so if we're going to do any sort of functional testing or performance testing, at some point to test our software, we need it running somewhere. And so often, we have different tools for build versus deploy. And they've got their own pipelines.

So oftentimes, the build process is fairly automated. You commit your change in Git, the pull request is accepted, and the build happens. We run our unit tests, we're able to get rapid feedback there. And then it's done. And someone needs to say, okay, now it's time to run some functional tests in a test environment.

Then they have to go to a tool, log in, and click a button. That time between when the build is completed and when someone decides to click the button and then does the work of clicking the button, as little as that is, is completely wasted. There's no need for that. We could have an event fire from one tool over to the other, or to a deployment tool, saying, okay, build done, go deploy.

There are lots of ways of doing this. The idea is you want to connect your pipelines and chain them together so that if the build passes, if we're meeting whatever criteria we should have to say this is one we'd want to do a functional test on, it's got to at least compile.

But if we want to run these tests, then let's just run the tests. We shouldn't wait for a person to say, oh, OK, now let me do it. That's just a really easy way to eliminate work that's not really high-value work and speed up the feedback. Just stitch it together, make it smooth, make it fast.
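
To make that chaining concrete, here is a minimal sketch of a build step that triggers the downstream deployment/test pipeline automatically instead of waiting for a button click. The trigger URL, environment variables, and build command are assumptions for illustration; the real mechanism (webhook, event bus, or native pipeline chaining) depends on your toolchain.

```python
import os
import subprocess
import sys

import requests  # any HTTP client works; requests is assumed to be installed

# Hypothetical endpoint that kicks off the downstream deployment/test pipeline.
DEPLOY_TRIGGER_URL = os.environ.get(
    "DEPLOY_TRIGGER_URL", "https://ci.example.com/hooks/deploy"
)

def run_build_and_unit_tests() -> bool:
    """Run the build and unit tests; return True only if everything passes."""
    result = subprocess.run(["./gradlew", "build", "test"])  # placeholder build command
    return result.returncode == 0

def trigger_functional_test_deploy(version: str) -> None:
    """Fire an event at the next pipeline instead of waiting for a human to click a button."""
    resp = requests.post(
        DEPLOY_TRIGGER_URL,
        json={"version": version, "stage": "functional-test"},
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    version = os.environ.get("BUILD_VERSION", "dev")
    if run_build_and_unit_tests():
        trigger_functional_test_deploy(version)  # chain straight into the next pipeline
    else:
        sys.exit(1)  # fail fast so the developer gets feedback immediately
```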

Harshit Paul (Director of Product Marketing, LambdaTest) - That makes sense. Thanks for clearing that up for me. And you know, you talked about null releases as well. And there's also another interesting aspect to this question, which is about redundant tests being part of the picture. So how can test avoidance be applied effectively to skip these irrelevant tests? How does test avoidance play a part in expediting your CI/CD pipeline?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Yeah, so if we know that all we changed was the text on the login button, something small like that, then we could fairly safely eliminate 99% of the tests we're going to run, if not more. Because we know that it's unchanged. And whatever test results we had last time are going to be the same this time.

So if it takes 20 minutes or an hour or six hours to run through all of our tests, functional and performance and security, all these things, we say, well, we know those aren't relevant. Then we could just not run them. And I think that gets tricky, right?

Because in the real world, we tend not to just change the text on the login button. We're changing the behavior of some method within some service that's used by stuff, right? And so it's a little complicated. So I see strategies like, I know the basic areas of my product, and I'm going to create test suites for every area.

And then we can choose based on where the code changes are: I'm going to run this suite and not these other suites. And that's OK. It does require some decision-making by an engineer somewhere. But maybe a little bit of decision-making is worth not running half our tests or two-thirds of our tests. This is also an area where vendors are doing a lot of really good work.

So at Harness, one of the approaches that we take around unit testing is something we call Test Intelligence. So we'll go ahead and deeply analyze the code base, look at all of the call paths, and understand the call graph.

So we know that if you've changed this method, what other methods are calling it, and what tests access into that same call path? So then, at test time, we'll just execute the tests that are relevant to the code changes in this build and leave the other unit tests un-executed because they're not relevant.

That'll help you cut out anywhere from half of your testing time at build time to 90 or 95%, depending on the change. So that can be really impactful. There are other players in the market doing this, including a dedicated test-oriented company called Sealights, and others as well.

So this is a place where I think the commercial tools can get a lot more sophisticated, but even in a roll-it-yourself sort of approach, you can at least define your test suites and be a little bit intelligent about which suites you execute, and cut out half the time pretty easily.
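
As a rough illustration of that roll-it-yourself approach, here is a minimal sketch that maps changed paths from git to test suites and runs only the relevant ones. The directory-to-suite mapping and the pytest invocation are assumptions for the example; call-graph-based tools like the Test Intelligence approach described above work at a much finer grain than folder layout.

```python
import subprocess
import sys

# Assumed mapping from product areas (source directories) to test suites.
SUITE_MAP = {
    "src/login/": "tests/login",
    "src/checkout/": "tests/checkout",
    "src/search/": "tests/search",
}

def changed_files(base_ref: str = "origin/main") -> list[str]:
    """List files changed since the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base_ref],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def select_suites(files: list[str]) -> set[str]:
    """Pick only the suites whose source areas were touched."""
    suites = set()
    for path in files:
        for prefix, suite in SUITE_MAP.items():
            if path.startswith(prefix):
                suites.add(suite)
    return suites

if __name__ == "__main__":
    suites = select_suites(changed_files())
    if not suites:
        print("No mapped areas changed; skipping functional suites.")
        sys.exit(0)
    # Run only the relevant suites instead of the whole nightly run.
    sys.exit(subprocess.run(["pytest", *sorted(suites)]).returncode)
```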

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. And as you talked about vendors also helping and chipping in, this happens to be one of my favorite areas to chip in as well. So at LambdaTest, we also help with test intelligence. We have that platform, which will help you unify all your test cases and analyze them with the help of AI.

And it helps you basically categorize all the different sorts of errors that you get. So you'll be able to see which kinds of errors are popping up, at what pace, and at what percentage. You'll also be able to mitigate flaky tests, and the platform does it for you.

So it gives you a flaky test trend, so you are able to understand, all right, these are the times at which my tests are being bothered by flakiness. And flaky tests, you know, nobody needs them on top of everything they have on their plate, right? They add uncertain delays. So that was one of our top priorities, to make sure we put that into the picture as we talk about presenting a unified testing solution to our customers.

Right. So, at LambdaTest, we have test intelligence, which can help you not only mitigate flaky tests but also understand what kinds of errors you are getting. And there's a lot more to talk about: automatic healing with the help of AI, one-click RCA, and whatnot.

So, by all means, go ahead and check out LambdaTest Test Intelligence in case you're listening to this and haven't done that yet.

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Flaky tests are such a great example, right? Because your test suite failed because a couple of flaky tests failed. And then you get to have a meeting about it, right? Like, our tests failed. Can we go to production? Well, no. These are actually flaky tests. And maybe we should.

And that's not a good use of anyone's time. And it slows you down. And it interrupts everybody. All for a meeting about tests that no one believes in. It's a bad situation. I love that you guys are going after flaky tests. So important to being smooth and fast.

Harshit Paul (Director of Product Marketing, LambdaTest) - This means a lot coming from you, Eric. Thanks a lot. And speaking of which, you know, we talked about bugs popping up. The later they pop up, the more problematic they become. You talked about, you know, if somebody's figuring out some problematic bug after three weeks, why not do it in the first week itself?

Or if you're doing it on a weekly basis, why not do it earlier? You know, speaking of testing in the early stages, there's also the part that talks about testing in production, right? You can't test everything in staging and be 100% sure, yes, I'm done.

So how does testing in production play a part in optimizing feedback loops? What challenges do you see, and how do you address them?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Well, I think as soon as we start talking about testing in production, blood pressure starts to rise in a lot of people. That's really reasonable. And I think you need to understand your application, right, for how appropriate this is.

I've worked with teams who are building software that runs inside devices implanted in somebody's spine, right? I don't want them testing in production. If it's medically critical, health critical, okay, maybe this isn't the right strategy. And if it is, I want to learn about that.

But if it's a consumer application, a mobile app that's fun, or helps people shop, well, maybe carrying a little bit of risk out in production is OK, a great exchange for innovation speed. And really, production is the only place that matters. You know, going back to my login button, let's imagine that, you know, it was gray, and not enough people were clicking it.

And some smart product manager was like, make it blue. And the developer messed up and made it green. But if it goes out in production and more people are clicking on it, and we're delighting our users in all of that, keep it in production. We can go back and change it to blue later. But we don't need to roll it back.

We don't need to say, don't deploy this. It's delivering the right business result. And the business result is what matters. And a lot of what we're doing when we're testing is trying to avoid negative business results. If you throw a lot of errors in your customer's face and your user's face, it's usually going to have a bad result. So let's not do that.

So I think there's a lot of value to that, and some performance characteristics too. You can do performance testing, but you're only really going to know when it's real users in production. Some elements, like does this actually meet the business need, you're only going to find out from your users.

You can find out if it does what the product manager wanted before you go to your users, in terms of whether it meets the spec. We can do that. But we only know if it's good with users. And so the most important, most valuable feedback comes from production.

And so, the question is, how do we get to production reasonably quickly and safely? And then knowing that no matter how good our tests are, it's not perfectly known until we're in production. That implies a lot about how we look at things in a product. So there are a lot of approaches to this problem.

All right, so one thing we would do would be to incrementally deploy the software into production using something like a canary deployment. And the idea with a canary deployment, for anyone who doesn't know, is this: if we've got 10 nodes running our software, 10 servers, or 10 things, right?

We deploy the new version of the software to only one, and then we control the traffic to that one, and we put only a handful of our users out there. And then we watch it. Are the error rates on that node that's new, are they as low as what was there before? Are they lower? Does it look healthy? Do our customers look like they're still giving us money? Are they getting their job done?

Whatever sort of telemetry we can pull from production, we can say, is this meeting the business need? Is this working for me in production? And to the degree it is, we can put more and more traffic on it. We can deploy to more and more of our nodes and roll this out sequentially. And you can automate these checks so that every time you're making a change in production, you're doing it in a small way first.

You're checking that it's working, and then you're rolling it out at scale. And that really limits the blast radius and the danger of putting things in production. And whether you say reading your logs and checking your observability tools as part of a deployment is testing in production, I tend to think it is.
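
Here is a minimal sketch of that automated canary check: shift a small slice of traffic, compare the canary's telemetry against the baseline, and either widen the rollout or roll back. The traffic-shifting and metrics functions are placeholders; in practice they would call your load balancer or service mesh and your observability tool's API.

```python
import random
import time

# Placeholder hooks; real implementations would call your service mesh /
# load balancer and your observability tool.
def set_canary_traffic(percent: int) -> None:
    print(f"routing {percent}% of traffic to the canary")

def error_rate(version: str) -> float:
    # Stand-in for a metrics query (e.g., 5xx rate over the last few minutes).
    return random.uniform(0.001, 0.01)

def rollback() -> None:
    print("rolling back: shifting all traffic to the stable version")

STEPS = [5, 25, 50, 100]     # progressively larger traffic slices
BAKE_SECONDS = 300           # watch each step before widening it
MAX_DEGRADATION = 1.1        # tolerate at most 10% worse than baseline

def progressive_rollout() -> bool:
    baseline = error_rate("stable")
    for percent in STEPS:
        set_canary_traffic(percent)
        time.sleep(BAKE_SECONDS)
        if error_rate("canary") > baseline * MAX_DEGRADATION:
            rollback()       # limit the blast radius automatically
            return False
    return True              # canary now serves 100% of traffic
```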

I think the other angle would be a situation where you've got feature flags in place. And so you can turn on and off individual capabilities of your software. Again, you would monitor the behavior of your users interacting with the software with the feature flags on and off. And so if a new feature, we changed the text of our login button, we changed the color of it, whatever that is, if the user behavior isn't as good, we don't even have to roll back.

We just say, well, turn that feature off. And then we have the old behavior. And we've really minimized it. So this gets to very much a business sense for our testing and our validation that what we're changing is making the software better, what we're trying to do. And it's leveraging our customers as part of that.

But doing that in a really responsible way, right? Again, if fewer people are clicking the login button, that's unfortunate, and we can roll that back. That's fine. If it's medically critical, we don't wanna be like, well, we just killed somebody, oops, let's turn the feature flag off. That's not an acceptable sort of behavior.

So there are different levels of risk. I do think that most organizations tend to think that the importance of their thing is a little higher than it really is. And so I'd encourage being a little bit aggressive.
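
And a small sketch of the feature-flag pattern described above, assuming a hypothetical in-process flag store: the new behavior can be rolled out to a percentage of users and switched off instantly, without a redeploy or rollback.

```python
import hashlib

# Hypothetical flag configuration; in practice this would come from a
# feature-flag service so it can be changed without redeploying.
FLAGS = {
    "blue-login-button": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    # Deterministic bucketing: the same user always lands in the same bucket.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["rollout_percent"]

def render_login_button(user_id: str) -> str:
    if is_enabled("blue-login-button", user_id):
        return "<button class='blue'>Log in</button>"   # new behavior
    return "<button class='gray'>Login</button>"        # old behavior; the flag is the kill switch
```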

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, and that's a really interesting way to put it into perspective: when you speak of canary testing, you're actually validating things, and you're breaking the bias. I don't mean to sound offensive, but of course, there's a lot of research put together by the product owners when they try to ship something new, be it an aesthetic change or a functional change.

And you want to make sure that whatever you've shipped, you validate, and either make or break on top of your bias. There's always a bias involved. And figuring out whether you want to go ahead with canary testing or feature flags, as you said, can actually help folks decide the right way to deploy the features and whether they're working out for them or not.

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Yeah, can I build on that real quick? I think your discussion of bias is so important. And that is, we had a good idea, and the product manager really likes their good idea. And the engineer worked hard to make it real. But in study after study about ideas in software, half of them are bad. It's like, as hard as we work to try to make software better,

Half the things we do make it worse. And so to kind of accept that and say, OK, we need to validate that each of the things we put in the market actually changes the metrics in a positive way is important. And if we're only releasing once a year, no one's going to do this.

You don't want to work for a year and then find out it was bad. But if we're releasing multiple times a day, and the product manager can just come sit down by a developer and say, hey, can we change the login button to make it blue and put a space in here? And they put it there, and then 10 minutes later, it's in production. They look at the metrics, and more people click it.

Cool! And if fewer people click it, they say, oh yeah, let's take that back. And if the investment was 15 minutes, it's okay, right? So I think trying to make these smaller changes and getting them to production faster gives us more permission to admit when we're wrong.

Harshit Paul (Director of Product Marketing, LambdaTest) - Mm-hmm. Right! That's really profound. I do have a curious question to add on top of it.

So you talked about there being a certain time window; you mentioned 15 minutes for this example, let's say. But it's also true that if you're looking at something you pushed three months back, you might be figuring out, oh, this could actually be one of the things that might be a reason for these numbers to decline.

But then you can't really pinpoint that one particular thing, because over time you've shipped multiple things, and you could be seeing a ripple effect of multiple things happening together.

Right! So that time window of experimentation is pretty crucial. So what do you take as a safe measure? What is a safe time period to validate these results when you push them?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Yeah, I'm going to be that guy who says it depends. So I think if you're getting dozens or hundreds of users through the system, you're going to get a reasonable level of confidence about whether your software basically works or not, whether you've broken something horribly.

So I think for something like a canary deployment, it should typically take minutes to get some level of confidence that you haven't broken things and roll it out more aggressively. But if you're making a change to an area of the product that 5% of your users use and they only use it occasionally, and you're trying to get that up to 6% and slightly more often, right?

You're doing one of those sorts of product things. I mean, it might take a month to figure out whether your changes are doing that. So I think the more precise you can measure a behavior, the better. Like if you're saying, well, my goal is we're going to make this code change, and then more money will happen. Right? OK.

If that's going to take a month to find out, there's going to be lots of changes that happen. And any of those changes could contribute to or hurt more money. So the game is to get to more leading indicators and more precise questions. Will more people click the Login button? Do more people successfully fill out this form?

Whatever that little step is, can I polish that? And then you're going to have fewer conflicting changes. And the feedback you get is going to be more precise, regardless of how long it takes. So I'm sorry if I dodged the question. But I think that you're...

Harshit Paul (Director of Product Marketing, LambdaTest) - No, you didn't, actually. That makes perfect sense because, at the end of the day, it's not about time. You also need a certain amount of data in hand, right? So as you talked about, with some features, given how much they're really being used, you can't really put them on a time basis.

You can't have a time cap on top of them. It's probably based on the volume usage. You need to have that metric in place if I get your point correctly, right? If I don't, by all means, feel free to enlighten me further on this. But this was a really interesting take. Definitely helps, right?

So we talked about multiple things so far, right? Am I missing out on any general optimization techniques that can also help us accelerate CI/CD, you know, efficiency-wise?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Just some other little ones. It's so basic, but hardware. You know, I mentioned at the start, right, when I got my Java builds down to being five or 10 minutes, and that was a big improvement for us. And, you know, the improvement at that time was to move from spinning disks to SSDs because IO is so important.

Today, yeah, it's often like, oh, we just need a build machine, and someone kind of casually spins something up in AWS or Azure or what have you. And it's not really well optimized for this job that alternates between being IO-bound and CPU-bound, and sometimes memory-bound, right?

And so you've got to make sure you've got enough headroom for all of that. In our commercial hosted continuous integration, we're doing this on bare metal, right? This is how we've spun up our build machines. So we take out all of the slowdowns from virtualization layers. I really like that strategy. But get a fast build machine.

Do these performance-intensive things that are slowing down your development team on some good hardware. It'll make your life better. It's pretty straightforward. Parallelize things. We've got lots of cores in all of these boxes now.

Clearly, we get a good build machine. And because a lot of this does alternate between being IO-bound and CPU-bound, you can run a couple of things at once. And one thing will be bound by one, and another will be flying on another, and you can do things in parallel.
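
A rough roll-your-own sketch of that parallelization; the specific commands here (pytest, ruff, bandit) are just stand-ins for whatever test, lint, and scan steps your pipeline actually runs.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Independent steps that don't need to wait on each other.
STEPS = {
    "unit-tests": ["pytest", "-q"],
    "lint": ["ruff", "check", "."],
    "static-scan": ["bandit", "-r", "src"],
}

def run(name: str, cmd: list[str]) -> int:
    print(f"starting {name}")
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Threads are fine here: each one just waits on an external process.
    with ThreadPoolExecutor(max_workers=len(STEPS)) as pool:
        futures = {pool.submit(run, name, cmd): name for name, cmd in STEPS.items()}
        results = {futures[f]: f.result() for f in futures}
    for name, code in results.items():
        print(f"{name}: {'ok' if code == 0 else 'failed'}")
    sys.exit(0 if all(code == 0 for code in results.values()) else 1)
```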

Run your unit tests in parallel with your static AppSec scans and your linting and those sorts of things. Run them in parallel; it'll be faster. And then finally, kind of on the same theme as with testing, like don't do things you don't need to do, caching is really, really important. So many builds I've seen start by downloading the internet, right? That's Maven or NPM or whatever these kinds of libraries are that we pull in.

So like starting by downloading just dozens and dozens of things off the internet, like you did this five minutes ago, like leave those libraries there in a reasonable cache so that you're not going through that. And this can go all the way down to parts of your software that you're compiling, leaving those in place.

People doing C and C++ work get these big object libraries that don't change that often. Leaving those around in intelligent ways and not rebuilding them every time can knock 50-80% off your build times really easily.
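
As a simple sketch of that caching idea for a pip-based project (the same pattern applies to Maven, NPM, or compiled object caches): key the cache on a hash of the dependency manifest and skip the download entirely when nothing changed. The cache location here is an assumption.

```python
import hashlib
import os
import shutil
import subprocess

CACHE_ROOT = os.environ.get("DEP_CACHE", "/var/cache/build-deps")  # assumed shared cache location

def manifest_hash(path: str = "requirements.txt") -> str:
    """Hash the dependency manifest so the cache key changes only when dependencies change."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def restore_or_install(target: str = ".deps") -> None:
    key = manifest_hash()
    cached = os.path.join(CACHE_ROOT, key)
    if os.path.isdir(cached):
        shutil.copytree(cached, target, dirs_exist_ok=True)   # cache hit: no re-download
        return
    subprocess.run(
        ["pip", "install", "--target", target, "-r", "requirements.txt"],
        check=True,
    )
    os.makedirs(CACHE_ROOT, exist_ok=True)
    shutil.copytree(target, cached)                           # populate the cache for next time
```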

So I think there's a lot of these things you can do. Get a build machine that'll go fast, run things in parallel, and cache, right? Cache your objects, cache your libraries that are coming in, and please cache Docker layers. All right, if you're doing Docker stuff: every time I've seen these builds go from five minutes back to 20, it's like, oh, we containerized.

And we're doing Docker badly. And like, yeah, do it well. And reuse these layers, because they're not changing very often. And it'll go a lot better, right? You shouldn't have to compile a whole virtual machine in order to do a Java build or, you know, NPM JavaScript stuff. It's not required anymore and never was.

So optimize that so you don't pay a huge penalty upfront for your easier deployability. Like, I love Docker. It's the right way to do things, but it should not slow down your builds as much as it is for most people.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. And as you also talked about, you know, hardware and making sure that the builds are running in parallel, right? So in case folks are wondering, some folks might already have a hardware setup in-house, or some folks might be using, you know, an emulator, a simulator, or whatever.

But I would just like to quickly highlight that at LambdaTest, you get ready-to-run hardware on a cloud setup. You can just quickly log in, get into the platform, plug in your scripts, and define capabilities for whichever platform, operating system, or browser you want to run your tests on, and we will spin up a machine for you instantly, while ensuring that you can also scale up the infrastructure.

It's difficult to do that in-house: if you procure, say, 10 devices today, procuring another 10 would be a challenge one month down the road, right? And maintaining them from time to time is another challenge.

And that is also a benefit that you get with LambdaTest: you don't have to worry about hardware maintenance or hardware speed. You'd also be able to scale your hardware infrastructure effortlessly by running your tests in parallel using the LambdaTest platform. Yeah. Go on, Eric!

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Yeah, the cloud infrastructure is so important. And shame on me for not mentioning that earlier. This idea that I can't run my tests because the test environment isn't available right now. What a great bottleneck. Make one. And then the objection is going to be, but then I've got too much infrastructure, and it's going to be expensive. Tear it down automatically when you're done or at night.

There are ways of dealing with this that are really, really successful. And it's APIs everywhere. And so just ruthlessly attack these bottlenecks and say, okay, how do we do this in a different way so that we don't end up waiting for a couple of days for a test environment to be available? Right. It's 2023, right? Probably 2024 by the time we're sharing this webcast. Like, use the cloud.
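
A sketch of that create-use-destroy pattern for ephemeral test environments, assuming the environment is defined with Terraform (any infrastructure-as-code tool works the same way); the workspace name and test command are illustrative.

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def ephemeral_environment(workspace: str):
    """Spin up a disposable test environment, and always tear it down afterwards."""
    subprocess.run(["terraform", "workspace", "new", workspace], check=False)  # may already exist
    subprocess.run(["terraform", "workspace", "select", workspace], check=True)
    subprocess.run(["terraform", "apply", "-auto-approve"], check=True)
    try:
        yield
    finally:
        # Tear down even if the tests fail, so idle environments never pile up as cost or bottleneck.
        subprocess.run(["terraform", "destroy", "-auto-approve"], check=True)

if __name__ == "__main__":
    with ephemeral_environment("feature-branch-tests"):
        subprocess.run(["pytest", "tests/functional"], check=True)
```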

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. Be smart, use the cloud, and use LambdaTest. That would be the take on this. So yeah. And, you know, we talked about optimization; it comes hand in hand with some challenges as well. We talked about one challenge, say, hardware setup and deciding whether cloud is the way to go. There might also be some other common challenges that you've come across while these phases of optimization happen. Care to shed some light on that?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Here we go. Yeah, I think we develop our technical capabilities to do these things separately from the human interactions in our process. And misaligning those things is very dangerous.

So if we get all the technical capabilities to deploy to production in 10 or 20 or 30 minutes, right, if we can do it quickly, but in order to deploy to production we still have to wait for the change advisory board to meet every two weeks on Tuesday afternoons, we didn't solve the problem, right?

So we have to then go address the social aspect and the process aspect. Because part of the benefit that we got and part of what we set up for our protection is being able to evaluate this one little change and how it's behaving in production. Well, now I've got everything that a team's done in two weeks. Right!

And it's harder to tease out the impacts of every change. So we lose a lot of those benefits. At the same time, I see people who read these stories and go, oh, okay, the DORA metrics say the more often I deploy to production, the better I am, and Eric said the feedback's good.

And so, let's deploy tomorrow, and we'll do it every day and all of that. But you haven't built up all the ability to run your tests quickly and get that really rapid feedback and monitor in production, so you know it's safe. And you haven't built up your whole safety net.

So you just deploy to production. You break things. And you break things the next day. And then people say, no, we're never doing this, and you set yourself back years in making progress. So the processes that you have need to be aligned to your actual technical capabilities, and misaligning those in either direction is problematic.

So I think it's the easiest place to get tripped up because, at the end of the day, it's a human exercise. We're all just people, and we're doing our best, but no one's perfect. So having a really good look at the people involved is so important. It's the only thing that matters.

You know, the other place would be, I think, cost, right? You know, someone's like, oh, we need the fast build server. And someone's like, well, the slow build server costs, you know, $10 a month, and the fast one costs a hundred dollars a month. That's a lot of money, right? Do we really want to do this?

And it's not a good optimization. It's a natural optimization, because you get a bill for something, and if you check a box differently, the bill goes down, right? Like, we want to do that as much as we can, but you also want to be in a situation where, when a developer makes a change, they get meaningful feedback about that change in the time it takes to go get a cup of coffee and come back.

Because if they do, that's the behavior you'll see. They'll take a quick break, come back, and act on it. If it takes much longer than that, then they'll start on some other work. And then they won't act on the feedback they get very quickly. And then you'll have a broken build with other developers, and you have this cascading set of problems that's really expensive but doesn't show up as a clear bill from your cloud provider.

So you know, I certainly don't advocate wasting money, but making sure that you're able to get rapid feedback that delivers the right sort of behavior from the engineers on your team is worth spending a little bit more on, right? And depending on the size of your team and all that, it might be worth spending a lot more on.

Take that look, and don't go for that kind of false cost savings of being really cheap about it. There's a difference between being frugal and spending your organization's resources wisely and being cheap.

When you're cheap, you just don't spend money in this way, but you end up spending a lot on labor instead, and that's not a win. Yeah, so those are some things that come to mind for me. How about you? Do you have any favorite places where this falls down?

Harshit Paul (Director of Product Marketing, LambdaTest) - Let me think about it. I think you covered most of it, and we've been talking about optimization challenges from the get-go. I think I would leave that to you, and I think you summed it up pretty well.

So I have nothing to add, and I'm just learning. I'm making notes side by side here and there, and writing things up left and right. I hope the folks will be doing that, too, while they listen to it. So yeah, interesting take from you. And I guess that about sums up our episode. I just have one quick question before we wrap this up, right?

And that one is something that gets asked pretty often, especially around this time of the year, which is what are the trends or innovations that you see happening in CI/CD optimization for accelerating feedback? What do you think is in the picture for, say, next year or down the road?

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Sure. Well, I think I'm legally obligated to say AI, right? So we've got to say AI. Realistically, I think AI is super important when we're looking at finding the flaky tests, as you mentioned, when we're looking at figuring out which tests to run and which ones we do not run.

Harshit Paul (Director of Product Marketing, LambdaTest) - We do have to.

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - AI has got a role there. We're implementing AI so that when the build pipeline breaks or the deployment pipeline breaks, it says, hey, we think this is the fix. All right, so bring that sort of knowledge in.

So I think AI will be making everything better, and that's great. At the same time, the importance of having a streamlined process will be greater in a world where AI is helping our developers code faster and faster and faster. The time it takes to go from idea to implemented code is shorter.

And so it's more important that we're able to get feedback around that, validation around that, and get it to production more quickly. So I think AI has an interesting role on both sides of the equation. We talked earlier about skipping tests that we don't think need to be run.

The place where I don't see anybody doing this yet, or at least not widely, is our security tests, our static analysis, and dynamic analysis. And that can be a fairly lengthy process to run through all of those scans. People are obviously reticent and cautious about skipping security scans.

And I get that. But I think we're going to start seeing more of that as we go forward. And then I think we're going to take a better pass, and we'll be testing more and more in production as the rate of innovation continues to accelerate. So I think we'll see more feature flags, more things like canary deployments, more synthetic users running in production, that sort of thing.

And similarly, I think we'll see more chaos testing, right? Not a super new concept out in the market, but one that's also not terribly common yet. The idea is that we have these big, complicated systems, so let's normalize parts of those systems getting turned off and make sure that our software continues to run in those circumstances, and that our engineering teams know how to respond when parts of our system fail.

So we get really good at it, because things fail all the time, and we should be good at dealing with that and failing over in really intelligent ways. So this idea of chaos testing, where we practice more and more of our infrastructure failing, is one where I think we're continuing to see an uptake as well.
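
And a minimal sketch of what a single automated chaos experiment could look like, with a hypothetical fault-injection API and service endpoint: turn one dependency off, check that the system still responds, then always restore it.

```python
import requests

# Hypothetical endpoints: a fault-injection control plane and the service under test.
CHAOS_API = "https://chaos.example.com/experiments"
SERVICE_HEALTH = "https://shop.example.com/api/health"

def run_experiment(target: str, duration_s: int = 120) -> bool:
    """Turn one dependency off, verify the system degrades gracefully, then restore it."""
    exp = requests.post(
        CHAOS_API,
        json={"action": "blackhole", "target": target, "duration_seconds": duration_s},
        timeout=10,
    )
    exp.raise_for_status()
    try:
        # Steady-state check while the dependency is unavailable:
        # the service should still answer, even if in a degraded mode.
        health = requests.get(SERVICE_HEALTH, timeout=5)
        return health.status_code == 200
    finally:
        requests.delete(f"{CHAOS_API}/{exp.json()['id']}", timeout=10)  # always stop the experiment

if __name__ == "__main__":
    ok = run_experiment("recommendations-service")
    print("graceful degradation confirmed" if ok else "system did not tolerate the failure")
```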

So yeah, those are some of the themes that I'm predicting as we march into the new year. We're gonna go faster, test more in production, and AI, AI.

Harshit Paul (Director of Product Marketing, LambdaTest) - AI indeed. And you did talk about skipping irrelevant security scans. I think that's going to be a follow-up topic with you someday. Definitely, down the road, people would be interested to know your in-depth take on it. And we should plan that, actually. That's an interesting topic, how to do that. But yes, all things in good time. And that brings a wrap on this episode for now.

Thank you so much, Eric, for joining us. Thank you so much to everybody who has been listening. Eric, you've been a wonderful and energetic guest, and I love your humor, by the way. I just would like to applaud you: you've kept things fun and, at the same time, fresh and detailed. So thank you so much for making time out of your busy schedule for this episode.

Eric Minick (Director of Product Marketing for DevOps Solutions, Harness) - Hey, it was so good to be here with you, and looking forward to talking again. Cheers!!

Harshit Paul (Director of Product Marketing, LambdaTest) - Likewise, and for folks who are listening, you can find the full recording of this on our YouTube channel as well.

Subscribe to LambdaTest in case you haven't done that already. Having said that, we will see you on upcoming episodes of the XP Series, where we will talk about interesting trends and all things testing with industry experts like Eric, as we did in this episode.

So make sure you hit the subscribe button or stay tuned for more episodes from the LambdaTest XP Series. Until then, have a great time. Bye bye!

Past Talks

Fast and Furious: The Psychology of Web Performance

In this webinar, you'll delve into the intricate psychology of web performance. Uncover the significance of prioritizing performance over design, understand why slow websites induce irritation, and examine the profound impact a 10-second response time can have on user satisfaction.

Watch Now ...
How Codemagic Mitigates Challenging Mobile App Testing Environments

In this webinar, you'll learn the secrets behind how Codemagic, a cloud-based CI/CD platform, helps tackle the challenges faced by mobile app developers and QA engineers and pro tips for healthy workflow infrastructure.

Watch Now ...
Revolutionizing Testing with Test Automation as a Service (TaaS)

In this XP Webinar, you'll learn about revolutionizing testing through Test Automation as a Service (TaaS). Discover how TaaS enhances agility, accelerates release cycles, and ensures robust software quality.

Watch Now ...