September 26th, 2025
56 Mins

Bryan Gullette (Speaker)
Senior Manager of Test Engineering, Dassault Systèmes


Kavya (Host)
Director of Product Marketing, LambdaTest

The Full Transcript
Kavya (Director of Product Marketing, LambdaTest) - Awesome. Hi, everyone. Welcome to today's webinar. We'll just wait for a couple of seconds before we start today's session. In the meantime, if you can share which part of the world you are from, probably put it in the chat. We would really appreciate seeing where you folks are joining in from.
Great. Hi, everyone. Welcome to another exciting session of the LambdaTest XP webinar series. Through XP Series, we dive into a world of insights and innovation featuring renowned industry experts and leaders in the testing and quality engineering ecosystem.
I'm Kavya, Director of Product Marketing at LambdaTest, and it's a pleasure to have you all join us today. So today's session is on Dassault Systèmes, a global technology leader known for its collaborative platforms that accelerate product innovation across life sciences, manufacturing, and engineering.
Dassault Systèmes' expert quality assurance team drives software reliability through rigorous test engineering, advanced automation, and continuous quality monitoring. So today, we are hosting Bryan Gullette, Senior Manager of Test Engineering at Dassault Systèmes. Welcome to the webinar, Bryan.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Thank you. Thank you so much for having me.
Kavya (Director of Product Marketing, LambdaTest) - Awesome. So, a quick introduction for Bryan. With over 15 years of experience in testing, including a decade of innovations at Dassault Systèmes and five formative years at PAREXEL, his professional journey is marked by work on high-performing teams that are committed to delivering high-quality, compliant, and groundbreaking technology to the life sciences sector.
So we are honored to welcome Bryan today, whose expertise promises to really throw a lot of light on today's discussion with real-world strategies and best practices. He'll, of course, also showcase the real-world challenges that he encountered and the innovative solutions he's worked on. So before we move on to the questions, Bryan, why don't you share a bit about your journey in the software testing space?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, sure. So, again, thanks for having me. This is great to be here, and I hope I can live up to it, deliver some of those things that you mentioned, and give everybody some good insights. So, I'm like so many people who kind of fell into software testing, right?
I started out in mechanical engineering, and then I spent some time in Romania helping some people out, doing some mission work there, and then it was time to come back and start the real world, basically, the long-term plan for my life. So, I tried to get a job in engineering, but I couldn't find one.
So I expanded my scope of searching, and first landed in a painting job that didn't pay enough, so I had to search further. The work that I'd done in mechanical engineering was a lot about testing; it was in manufacturing then, too, and what they'd had me do was run some tests on their process to see where they were having quality problems.
And so, I would devise different mechanical-type tests to just figure out what was wrong with their quality. And I think this translated really well into the software world. So, when I went to the interview for the software engineering test role, they were asking me questions about how to test things, and my brain started going, and I was able to translate from the mechanical world into the software world well enough that they took a risk on me. It was still a risk, right?
But I think that's what happened with so many of us who got a job in testing: somebody took a risk on us. And what I love about that is it brings so many different perspectives into the software space. And I love that you can translate so much of life and what we do into the testing world. So, I think it's just a wonderful industry in that way, but that's kind of how I got into it, starting back at PAREXEL in that testing job.
Kavya (Director of Product Marketing, LambdaTest) - Very interesting career trajectory, Bryan, and of course, how you pivoted, you know, from so many different career paths into testing. That's pretty impressive. And as you rightly said, people have taken risks on you, which, of course, has led you here. Awesome. So, let's start with the first question that we have for you, which is, what challenges do teams specifically face in balancing automated testing with deep manual test coverage, and how does this impact defect rates reported by customers?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, thank you. This is a good question, because it gets right into the heart of what I wanted to share. But the basic problems that people have is, first, knowing what the right balance is, and then also how to achieve that balance. So the main idea is that we want to have testing coverage that's thorough and effective, right?
This should include some smart and useful automation that's done quickly, in order to leave time for us to explore manually, to put human eyes on the system to see what the user will see, right? And so, once you understand what the right mix is for you, and we can talk about that a little bit more later, the next thing is to figure out how to get there.
So being able to achieve the right balance is the next struggle that people have, and for this, I think we can look at where people normally land, and why. And oftentimes, it may not be the case for everyone, but we have developers who just want to write code, and getting them to write a test is a struggle.
Anything beyond a unit test, forget about it. And so sometimes you get into those situations where developers are, at best, writing some unit tests, and what this can mean is that they're not really covering the acceptance criteria. They're not thinking through what the use cases are for the user.
They're not doing business-driven development or test-driven development for acceptance criteria. So, oftentimes, what that means is their code just doesn't do what it should. It's either incomplete or it's incorrect. So what we want to do is we want to cover that gap quickly, right? Have the automation in place, and if you can have the developers give you that good code right up front that's tested, it reduces some cycles between dev and test.
But really, more importantly, it gives the testers a lot more time to be able to explore the app and do the more complicated workflows that the users will actually encounter in the real world when they're trying to do their tasks. And so, that's where this all comes together: how do we get ourselves to a place where we quickly cover the baseline with automated tests, and then allow the testers to really pave the way for the users in the real world?
Kavya (Director of Product Marketing, LambdaTest) - Thanks, Bryan. It's pretty interesting because, you know, you've thrown in quite a bit of insight there. I think it also highlights how balance isn't about doing more of everything, but about doing the right kind of testing, for instance, right? I think a lot of this also depends upon how teams are listening to the customers, because that influences the customer experience again at the end of the day.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, for sure.
Kavya (Director of Product Marketing, LambdaTest) - And how do QA teams at Dassault Systèmes decide when to use automation versus manual testing or exploratory testing to catch potential defects before release? Is there, like, a framework that you have in place, which you have made for the team at large?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, pretty much. What we try and do is automate everything that we can, but more just focused on the acceptance criteria. So, if it's something that's automatable, we try and have the developers do that, and actually, that's the way we do it on our team: we have the developers writing the acceptance criteria tests.
Now, there are some things you don't automate there. When it's difficult to test with automation because of the technology that you're using, that would be one thing. But also, you want to be very cautious about automating things that are complicated workflows, things that string together, because they can lead to bloat in your test suite.
And it can also be brittle; it can fail to surface the information you need quickly. Like, say, for instance, you have a workflow, and, you know, you go from A to B, and then you go to C, and then you go to D, right? Do you do a test that goes all the way from A through to D? I think that's less helpful, because anything along there could break, and probably will, right? Murphy's Law.
But if you can break that down into, you know, a test that goes from A to B, and then another test that goes from B to C, and then from C to D, that's better, because then you can also ask, what if you can also get to D directly from A? So then you'll have another test from A to D. Those singular paths you can do, whereas if you were going to try and string all those different workflows together and back, you could have, you know, basically an infinite number of tests and combinations that you would have to write.
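[Editor's note] As a rough, hypothetical illustration of the path-based decomposition Bryan describes (the FakeApp class and its transitions are invented; this is not his team's actual suite), each leg of the workflow gets its own small test instead of one long A-to-D chain:

```python
# Hypothetical pytest sketch: each leg of the workflow (A->B, B->C, C->D, plus
# the direct A->D path) is its own focused test, rather than one long
# end-to-end chain that can break anywhere along the way.
import pytest


class FakeApp:
    """Stand-in for the real system under test (purely illustrative)."""

    ALLOWED = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")}

    def __init__(self):
        self.state = None

    def start_at(self, state):
        # Jump straight to the precondition instead of replaying earlier steps.
        self.state = state

    def go_to(self, target):
        if (self.state, target) not in self.ALLOWED:
            raise ValueError(f"cannot go from {self.state} to {target}")
        self.state = target


@pytest.fixture
def app():
    return FakeApp()


def test_a_to_b(app):
    app.start_at("A")
    app.go_to("B")
    assert app.state == "B"


def test_b_to_c(app):
    app.start_at("B")  # set up state B directly; don't replay A->B
    app.go_to("C")
    assert app.state == "C"


def test_c_to_d(app):
    app.start_at("C")
    app.go_to("D")
    assert app.state == "D"


def test_a_direct_to_d(app):
    app.start_at("A")
    app.go_to("D")  # the alternate single path from A straight to D
    assert app.state == "D"
```

Each test sets up its own precondition and checks one transition, so a failure points directly at the leg that broke.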
And you can't just crush that with AI and automation. You have to be a little bit smarter about what the customer needs are. So, the basic rule is, let's automate all of the acceptance criteria, the explicit ones and the assumed ones. And by the assumed ones, I just mean that they may not be written explicitly in the requirements, but they're born out of an understanding of the environment that they're in.
So, everybody knows that you have to have this thing there; we don't have to write it every time. Those things you should have tests built for, too. So, hopefully that helps. I don't know if it's a formula so much as a guideline, but that's what we try and shoot for.
Kavya (Director of Product Marketing, LambdaTest) - No, absolutely. I think it sounds like a framework to start off with when it comes to decision-making. And it's also a reminder that it's not automation versus manual testing, but rather figuring out which approach works best in certain situations. And I think what also makes it interesting is how you mentioned, you know, automating anything that can be automated, wherever it's possible, right?
Very interesting. And I think this would also be a great insight for leaders who are trying to figure out where to invest their time and effort, because most quality engineering leaders, such as yourself, are working in enterprises and dealing with large-scale projects, for instance, and they might want to figure out where best to put their effort and how. So, yeah, pretty interesting.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, a couple of things you mentioned there.
Kavya (Director of Product Marketing, LambdaTest) - Yeah, do it.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - I want to jump in on what you mentioned about it not being manual or exploratory, and give some of the background. You know, our leadership has been on both sides, and very, very rarely in the middle on this issue. So, I've seen some different waves come across our company that I get to speak to a little bit.
So I'll introduce that concept a little bit now. Some of the leadership came in and said, we've got to automate everything. And then some of the leadership, some of the names that you mentioned before when we were talking, James Bach and that company, they like to do exploratory testing, and we had leadership who were very much in that camp.
They said, let's really enhance our exploratory testing. And don't get me wrong, they wouldn't say, don't do automated testing. But really, they wanted to highlight the value of the exploratory testing, and so that's really what the focus was for that leadership phase. So, we've really seen both sides, back and forth, and then back again. And what it turns out is that we've had to sort out what the value is of each one, and how to just do the right one.
So when we say automate everything, we don't want to spend a lot of time trying to improve or create an automated test that really has no business being automated, in the sense that it's going to take too long to actually write it, or it's not going to add value, or it'd be way easier to do manually and you don't need to do it every time. So there are many reasons not to automate a test, and I think you just gotta hold to those and understand what those cases are.
Kavya (Director of Product Marketing, LambdaTest) - Oh, very well put, of course. Thank you. So, moving on to the next question, how can integrating manual and automated testing approaches also lead to improved defect detection and prevention in real-world projects?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, this has, I think, been really central to what has driven down our customer-reported defects. Having the right mix and use of automation and manual testing can make sure that you're covering your acceptance criteria, first of all, and that you have the more complicated workflows really put to the test. Right?
So, they're different in nature because the acceptance criteria are checking the box, this is what I should do, and all things are rainbows and sunshine, right? And acceptance criteria can certainly cover some of the negative cases as well. But what you have in the real world is something that's far more complicated. It's humans making human mistakes, and then trying to recover from that in the worst possible way that you would never have thought.
And so, all of those types of things are things that you really need to try and work out in exploratory testing. And there's a whole range there; if you're not seeing the value of exploratory testing, maybe it's something that you should focus on. We don't have time to go over the key points of how to do exploratory testing well, but if you're running out of ideas there, just know that there are many, many more ideas that can help.
And so, giving yourself the time to explore those things is a really important part of doing that. As far as determining for each project what you might need to do, because that's important when deciding what the right mix is, here's a note about different applications and what their needs might be.
So, you can think for yourself about what your application does. If it has tightly controlled workflows, then it might require less manual exploratory testing, because you might be able to really get most of the way there with your automated tests, just crush the combinations, and you're good, because you've controlled the workflows enough that there won't be many places for them to fall through.
Now, on the other hand, if your application gives more freedom for what the users can do and what input they can have, then that might require more exploratory testing. And there's another way to look at it, too: if your application has a clear silo of what an individual can work with, so they're working on their own personal data, for example, then you might not need as much exploratory testing. But if you have a lot of objects or data interacted with by many users, and they're all interacting in different ways, then you might need more exploratory testing in general for your application.
Understanding what your application is can really help guide you into knowing what the right mix of manual and automated testing should be, and then give you something to aim for. And having that right mix is really just about asking, alright, have you covered everything in an efficient way, so that you can find the bugs before the customers do? Yeah.
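[Editor's note] For the tightly controlled case where Bryan suggests you can "crush the combinations" with automation, here is a minimal, hypothetical sketch; the export rule, roles, and formats are invented, and the expected outcomes come from a made-up requirement rather than from the code under test:

```python
# Hypothetical sketch: when the workflow is tightly controlled and the input
# space is small, enumerate every combination and check each one.
import itertools

import pytest

ROLES = ["admin", "editor", "viewer"]
REGIONS = ["us", "eu"]
FORMATS = ["csv", "pdf"]

# Expected outcomes taken from the (invented) requirement, not re-derived from
# the implementation: only viewers exporting PDFs should be denied.
DENIED = {("viewer", "pdf")}


def export_allowed(role, region, fmt):
    """Toy stand-in for the behavior under test."""
    return not (role == "viewer" and fmt == "pdf")


@pytest.mark.parametrize(
    "role,region,fmt", list(itertools.product(ROLES, REGIONS, FORMATS))
)
def test_export_combinations(role, region, fmt):
    # 3 roles x 2 regions x 2 formats = 12 cases, all generated automatically.
    assert export_allowed(role, region, fmt) == ((role, fmt) not in DENIED)
```

This only stays manageable because the input space is small and closed; for freer-form applications, Bryan's point is that exploration has to carry more of the load.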
Kavya (Director of Product Marketing, LambdaTest) - Thanks, Bryan. Again, a great point, because it also makes me wonder how, you know, this integration piece is often overlooked at certain times, because as you were mentioning earlier, right, people either tend to choose one of the testing approaches rather than mixing them. And a lot of times, when both approaches are being used and they're working in tandem, I think that's when teams are able to, you know, uncover the risks and make releases with much more confidence.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, for sure.
Kavya (Director of Product Marketing, LambdaTest) - So, moving on to the next question, what metrics or feedback do the Dassault Systèmes QA teams use to measure the effectiveness of their testing mix and adjust as projects evolve? Of course, you mentioned briefly about choosing the right approach, but I'm sure that the audience would be curious to know if there are any metrics that you have in place.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah. Yeah, I mentioned, you know, defects that are found in production, what we call customer-reported defects, and this is really one of the most important things that we track. We want to find the bugs before the customers find them, because if we can find them, you know, hopefully early on in the development cycle, it's easier. I mean, I think people are aware that the earlier you find a bug, the lower the cost to fix it. Once a customer's found it, it's more costly and more problematic.
So, if you find a bug, and whether it's a big bug or a small bug, a very important one, or an insignificant one, at least you have the information to make decisions on that early, to say, okay, this is one that I don't think customers will find. If they do, they'll be able to easily work past it. Right? But regardless of what kind it is, any customer-reported defect, we take back, and we analyze, and we try and determine what we can do to test for things like that, to make sure that we're finding those things first.
So that's a big, important thing that we've built into the process. Another thing that we track is just the full count of existing defects out there. We're trying to make sure that we fix as many as we put out, or ideally more, right? So that our system is consistently getting better. And I kind of mentioned, you know, every time we find a bug as we're developing, whether it already exists in production and we just found it before the customers did, or we find it as part of the work that we're developing, we have a triage meeting that meets regularly, and they evaluate whether the bug is important to be fixed or not. But all of these different touchpoints with the bugs, and looking at them at different levels, are really helpful for us to feed back into our testing loop to see, like, what are the trends, what are the kinds of bugs that are getting through, either in the development process or into, you know, the customer's awareness, and what can we do to fix that?
Because quality isn't just on the tester's side. Sometimes the quality issues we have are development issues, and sometimes they're product issues. We're not defining stories properly, and we're not building them properly early. We're not finding the issues early enough. So, we want to take a holistic look at that and those are the metrics that we use to really help us determine what kind of trends we need to fix, you know?
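[Editor's note] As a tiny, hypothetical illustration of rolling up the two signals Bryan mentions, how many defects were customer-reported and whether more are being fixed than opened, nothing here is Dassault-specific and the fields are invented:

```python
# Hypothetical sketch: summarize defects per release to watch the trends
# discussed above (customer-reported count, and fixed vs. opened).
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Defect:
    release: str
    found_by: str  # "customer" or "internal"
    status: str    # "open" or "fixed"


def summarize(defects):
    summary = defaultdict(lambda: {"customer_reported": 0, "opened": 0, "fixed": 0})
    for d in defects:
        row = summary[d.release]
        row["opened"] += 1
        if d.found_by == "customer":
            row["customer_reported"] += 1
        if d.status == "fixed":
            row["fixed"] += 1
    return dict(summary)


if __name__ == "__main__":
    sample = [
        Defect("2025.1", "customer", "fixed"),
        Defect("2025.1", "internal", "fixed"),
        Defect("2025.2", "internal", "open"),
    ]
    for release, row in summarize(sample).items():
        # "net" > 0 means more defects were fixed than reported in that release.
        print(release, row, "net:", row["fixed"] - row["opened"])
```

The numbers themselves are only the start of the conversation, as Bryan says next; the point is to make the trend visible so the team can ask why.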
Kavya (Director of Product Marketing, LambdaTest) - Thanks, Bryan. Thank you for laying that out so well, because I like how you have emphasized not just the metrics, but also the need for a holistic approach when it comes to testing. As you rightly said, it's not just the tester's responsibility anymore; everyone working across the entire SDLC is responsible for it at the end of the day.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - As much as I don't like metrics, or rather, I don't like them when they're just used as a metric, really, they should be the start of a conversation. That's what I'm trying to highlight here: we use different things, and other companies use different things, and if it's not just the start of a conversation, then it's probably being misused, so yeah.
Kavya (Director of Product Marketing, LambdaTest) - No, great, of course, and I think it's also about, you know, the need to refine your quality strategy as you go, and also making sure that you get faster feedback, for instance. And for that, I think metrics are an important criterion; they give you more insight into a more mature quality engineering practice at the end of the day.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, that's right. Yeah, metrics that can feed directly into improvements, that's the right kind of metric. Yeah.
Kavya (Director of Product Marketing, LambdaTest) - Great. So, moving on to the next question, what key challenges and benefits do teams encounter when shifting from manual-heavy testing to an automation-focused strategy?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Well, this is a big question, right? There are lots of challenges that people can have in moving into an automation-focused strategy. And, you know, we talked a lot about having the right balance, right? So here we're assuming that, okay, you've discovered that you've got the wrong balance; you've got too much manual-heavy testing, so you're not able to get to that deeper exploratory testing, because all your time is going to the manual testing of the acceptance criteria, for example, and that's taking too long.
So, in that case, you want to automate more of the acceptance criteria so you can spend more time doing the manual exploratory testing. In this case, the first challenge, I think, is the mindset, having the right mindset about the end goal. The shift shouldn't really be away from manual testing towards automation testing. It should be: let's do the right kind of automation testing so that we have time to do the right kind of manual testing.
So I think that's one of the key challenges that people have in organizations: when they see the problem, they've got too many manual tests, it's too slow, but they see the goal as automate everything, and so I think they need to first shift that mindset. That being said, when you're going to make that move, you have to know what an effective test suite looks like, and some of the things that we talked about before, like, what are the right kinds of tests to automate.
So that's a good thing to keep in mind. But then also, you definitely do not want to translate your manual tests directly into automated tests, because manual tests are built like user workflows, and they string lots of things together, because a human can absorb that kind of thing and deal with problems as they come. But automated tests are brittle, and they don't have the observability built in like a person walking through it step by step, who knows exactly where the problem was, right?
Because they put in the data four steps back, and now it's wrong. So, this is why you want to build your test suite differently than you had written your manual tests. And then another problem is automating too little. Of course, this is already what you've discovered when you're trying to move from too many manual tests to more automated tests, but you want to make sure that you're covering the acceptance criteria, at least, right?
So, if you've done a lot of, like, custom user workflows, then you might be tempted to make those the requirements that you automatically test. But it's good to go back and review what the acceptance criteria are for the stories and, you know, the low-level rules, and make sure that the system is functioning as intended for each one of those individual cases, down to, like, is the button disabled when it should be, right? Make some tests for that.
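[Editor's note] A minimal sketch of the kind of fine-grained acceptance check Bryan describes ("is the button disabled when it should be"), written here with Playwright's Python sync API; the URL, field label, and button name are made up for illustration:

```python
# Hypothetical Playwright (Python, sync API) sketch: one small test per
# low-level acceptance criterion, e.g. "Submit stays disabled until the
# required field is filled in".
from playwright.sync_api import expect, sync_playwright


def test_submit_disabled_until_required_field_filled():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.test/new-record")  # hypothetical page

        submit = page.get_by_role("button", name="Submit")
        expect(submit).to_be_disabled()  # criterion: disabled on an empty form

        page.get_by_label("Record name").fill("Trial 001")
        expect(submit).to_be_enabled()   # criterion: enabled once the form is valid

        browser.close()
```

Each criterion gets its own short, direct check rather than being buried somewhere inside a long converted manual workflow.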
Now, it'll be highlighted appropriately in, like, a longer workflow that you might have had as a manual test, but you just want to nail that one down and then move on to the other requirements, for example. And then one more thing I'd say is that a lot of what I'm pushing towards is having these acceptance criteria tests written at the beginning, and then moving from there on to the manual tests. I really like the model where the developers are writing these acceptance criteria tests. Having them have ownership over it has been really valuable for our team.
And a couple of reasons for that are that the developers own that part of the quality process. They have to do, like, the business-driven development for it, so they're handing better code over to the testers for that reason. And it's saving a lot of time for the testers on things that are just checkboxes, so that they can do what humans are better at, the human-type things that humans will do in the real world. And so, that's what you want: the automation should cover the application's acceptance criteria.
And then the manual tests should be, you know, people trying to be people, and seeing what people see. And so, you can set that out as a goal or a stipulation, to say, okay, we're going to move, but what we want to do is have developers write these acceptance criteria tests, and, back to the beginning, the mindset is that the end goal is not that we don't do any manual testing, but that we're doing the kind of manual testing that's effective and that's important for our customers. So yeah, those are some of the pitfalls and challenges, I think, to keep in mind.
Kavya (Director of Product Marketing, LambdaTest) - No, excellent perspective, Bryan. I think it also shows what you highlighted at the beginning of the conversation, right? Moving towards automation does bring its own challenges with it, but there are also the long-term benefits when it comes to scalability, repeatability, and so on. One thing that you definitely highlighted was the aspect of shifting the mindset, and also what an effective test suite should look like at the end of the day. So, you know, just out of curiosity, I also wanted to ask: personally, you must have worked on hundreds of projects at Dassault Systèmes, right? Is there any specific aspect that stands out, or a learning that you had when you faced a challenge while moving from manual-heavy testing to an automation-focused strategy? Was there a very specific challenge that you encountered?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Oh, interesting. So, I have to go back to PAREXEL, really, to leverage those kinds of examples, because when I came to Dassault Systèmes, where, you know, I work for the Medidata brand, they had this process in place already, and so it was wonderful to see the switch. But we were doing a lot of manual testing at PAREXEL, and so I guess we were trying to do some changeover from manual testing to automated testing.
And I think that's where I really learned the challenge there, was in learning how to write the tests differently than the way we had written them for manual testing. So it was so easy to just say, like, we want to convert this test over to be an automated test. And so, we tried that, and we failed miserably because it was too brittle. It just wouldn't run all the way through, we could never get it to pass, and so our whole test suite failed because it was only one automated test, and it wouldn't pass because there were too many complicated things in it, right?
So we learned quickly that we had to break that down into smaller parts, so that's one of the main things that we learned. But then, trying to think of other lessons from that challenge: when you do manual testing, there were certain requirements that you had to follow. Like, if a step was wrong, you'd have to properly mark it and initial any changes for minor changes; larger changes you'd have to handle differently. So all those kinds of process things were ingrained at the manual level, and the reason you had to do those things is that I'm in a very regulated environment, and so all those things had to be very specifically controlled.
But even if it's not a regulated environment, the reasons why you're doing the test are a little bit more apparent when you're writing the manual test than the automated test. I think the challenge when moving to automated is, oh, it's so fast, you can just automate it, and you get into cases where, I guess, you can have test bloat, because you don't understand why you're writing the test. Is it really an important test? Does it add value, right?
And so then you have to get into, like, equivalence partitioning. Like, if I do this test with the number 1, and I do the test with the number 2, have I learned anything important, or are those basically the same code path and the same risk? And so I think that's another challenge that you have when moving to a more automated setup: you can lose sight of the fact that it is actually important to only be running automated tests that have value.
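[Editor's note] A quick, hypothetical sketch of the equivalence-partitioning idea Bryan raises: rather than testing 1 and 2 separately when they exercise the same code path, pick one representative per partition plus the boundaries. The validation rule below is invented for illustration:

```python
# Hypothetical sketch of equivalence partitioning: an invented rule that a
# quantity must be an integer from 1 to 100 inclusive. One representative per
# partition plus the boundary values, not every number in the range.
import pytest


def validate_quantity(n):
    """Invented rule under test."""
    return isinstance(n, int) and 1 <= n <= 100


@pytest.mark.parametrize(
    "value,expected",
    [
        (-5, False),   # representative of the negative partition
        (0, False),    # boundary just below the valid range
        (1, True),     # lower boundary of the valid partition
        (50, True),    # one mid-range representative; 2, 3, 47... add nothing new
        (100, True),   # upper boundary
        (101, False),  # boundary just above the valid range
    ],
)
def test_quantity_partitions(value, expected):
    assert validate_quantity(value) == expected
```

Six cases cover the same risk that hundreds of near-duplicate values would, which is exactly the bloat concern Bryan goes on to describe.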
So, that's especially important now as we're getting into the AI world, where code is building code, and so I can see mountains and mountains of tests that don't have any value slowing down test suites and backing up reporting frameworks and all that kind of thing. So yeah, just to come back to one of those phases where, you know, the leadership in our company said we want to automate everything: I landed on a team at Dassault Systèmes that had automated everything pretty well, and so we were wondering, well, what else do they want us to automate?
Because at the time, we were already dealing with the fact that our test suite's report output was too large, and it was breaking other systems where we had to place the report. So we were dealing with those kinds of things; we're already too big, why would we want to make it bigger?
How do we start to trim this down so we're running it more efficiently? It's taking two hours to run the whole test suite, and that's too long; we want it faster, you know. So, initially, it might seem like there's a lot of room to grow, and the test suite is very short at first, but if your project is going to last for any length of time and grow in complexity and size, then you have to be careful with the bloat. Your future self will thank you when you consider test bloat, yeah.
Kavya (Director of Product Marketing, LambdaTest) - Thanks so much, Bryan, and of course, great insights. I think it's always interesting when we go back into our professional journey and figure out what we learned, and what you shared was definitely very insightful. You know, one thing that I've often read about Dassault Systèmes is that, for instance, when it comes to a couple of advisory and analyst firm reports, they've always acknowledged the product leadership, the industry strength, as well as, of course, the customer experience that Dassault is able to provide as an organization.
So it's interesting to know that the frameworks that you mentioned, the kind of guidelines that you have in place, the kind of strategies that the quality assurance teams at Dassault have put in, also probably contribute to what we are reading, right? I mean, outside of the organization, when we read about Dassault Systèmes, I think a lot of what you said also contributes to the strength the platform and the organization have been displaying.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah. Yeah, I'd like to think so, you know, that we're doing some things right, and it's having a positive effect. For sure, we've seen that in the customer-reported defects. I can't take too much credit for this, because it's a full team effort, but I put some metrics together, and over each of the last three years, our customer-reported defects have been going down.
And so, I started wondering, well, why is this happening? And I think it's because we put this mix together, where we have the acceptance criteria tested, and we've had a focus on making sure we have enough automated tests, and we've had leadership that's come and helped us focus on how to do exploratory testing really well. And, you know, when you put those two together in the right way, then I think that's what you see: customer-reported defects starting to go down. And so that's why I wanted to share it.
Kavya (Director of Product Marketing, LambdaTest) - Very impressive. Great job, of course. So, one question that has come up is, how can organizations at scale, like Dassault Systèmes, for instance, embrace AI? Is there anything specific that you might want to share with the audience on that?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, so I've seen other companies maybe adopt AI faster than we have. Certainly, one of the main considerations is security; you're gonna have to get the security right. Of course, companies don't want to give away their code to AI companies, so that's one of the things that took us a little while, making sure that it was safe. And then we've had some attempts, I think, like most people have, at trying to see where AI is going to be valuable.
And each different project is going to have their own specifics, and it's really related to what kind of data they can make available. And we're oftentimes using the same kind of data, you know, similar bug reporting systems, and similar code-based systems, and, you know, similar IDEs, and things like that.
So, it's all pretty common. The data's out there, and we're starting to get that available to AI when it's secure, and it's just a process of learning, you know, what this new relationship looks like, because we have kind of a relationship with AI, and so we have to try it and experiment with it to see where it's actually valuable.
And, you know, if people haven't tried it, you know, from what I've seen, it's kind of spotty results, and you have to be careful with what you prompt it with, and so there's this whole kind of prompt engineering concept that's a real thing. You can ask it for things and get bad information.
And then you ask it slightly differently, and it seems like it's got the best information in the world. Which doesn't give me a lot of confidence when I think, you know, I'm gonna ask it the next thing, and who knows what I'm gonna get? Was it a good question or not? So, in all of these things, I keep coming back to that principle of having the human in the loop, and that's just been one that's really hit home for a lot of people.
I think what it means is that I compare the output of AI to what the human would have done; the human is the standard. I think a lot of people are comparing this AI to that model and seeing what seems like a pretty good result, but really, stack that up against what you would have actually done as a human, and see where it gets you.
And I think that's where if you can consistently show that it's adding value over what a human would have done, then I think that's good. Another thing I'd say is, like, as you start to scale in your company, there's a lot of hype around it, and I, you know, maybe hype is even a loaded word.
But there's some real enthusiasm and excitement about what it can do, and I think the potential there is real. But I think a lot of times, unrealistic expectations are being formed by initial results, you know, like, how fast it set up a POC, and how fast it could, you know, write certain ideas down, or something like that. Because the real… the real benefit isn't that it can output a lot of text quickly, it's the thought that's behind it. Did it make you think something different?
Because a person, the human in the loop, is still gonna have to go through and do the review, the very careful review. And even, I would say, a more careful review than any review we've ever done before, because it's gonna be tempting for us to think that AI is great and gives good answers; it'll output tons and tons of information, and we'll think it's complete.
And do we really have the discipline to go through and review it carefully, not just for what's there and whether it's actually accurate, but for what it's still missing? So, we have to really engage our review muscles a lot more than we ever did before. When a human wrote something and gave it to you for review, it was a lot safer, I think, than with an AI, because humans have very predictable human errors that we've learned to understand.
But AI has an error pattern that we haven't really adapted to yet, and so that's why our reviews have to be extra strong and extra diligent. So, that's my current take on AI and using it: just get ready to review in a way that you've never had to review before.
Kavya (Director of Product Marketing, LambdaTest) - Yeah, super interesting. Thanks, Bryan. We have one more question, which is, in your experience, what are the key indicators that a team has the wrong mix of automated versus manual tests?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Okay, the wrong mix. I think you can look at some of the metrics, like, are customers finding too many defects? Right? Are they finding important ones that cause us problems we have to do hotfixes for, or something? So, quality issues can certainly be a sign that you might need to change something in the way that you're developing, but just like any metric, it should be the start of the conversation.
It might point to the fact that you don't have enough time to do deeper exploratory testing. It might point to the fact that your acceptance criteria tests aren't automated, and you're missing some of those automated tests, right? So it depends on what kind of errors you're finding, or where in the process things might be breaking down. Sometimes people will say, like, it's not doing this.
And then you realize you never actually built it to do that, and so it's kind of a requirements problem, and you need to figure out what in your process is missing a well-thought-out plan for your software. So yeah, use those customer-reported defects and any other quality issues as the start of a conversation, start asking the whys, and it might point you to the test mix. That's what I would say to go to if you're trying to figure out whether you have the wrong test mix, yeah.
Kavya (Director of Product Marketing, LambdaTest) - Thank you, Bryan. I hope that answers the question for the audience. So we're on to the last question of the day, which is, what tips can help QA teams effectively communicate the rationale behind their testing strategy and proposed changes to project leadership?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, so, this is a challenge that we haven't necessarily addressed in the previous discussion of changing things over. But primarily, first of all, you want to know why you have what you have. So just understand the excuses, the reasons, whatever you call it, the history behind it, to understand, you know, what situation you're in. And then clearly define the end state that you want, right?
That's what top leadership strategy is gonna be about, you know: what do you want to get to? What value does that give us, right? The end state has to come along with the value for them. In this case, it's very easy: we're trying to reduce customer-reported defects by having a proper test mix, and we can show how it does that.
And we can show why it's important to not have customer-reported defects. If you have a project that's had, you know, major scrambles, or you've had to recover from customer problems, like, this would be a great case to say, like, hey, let's look back at this and see, you know, how's our test mix look, and what do we want it to look like?
So, clearly define that end state, and then you have to know what the gaps are. You've looked at how you got to where you are; now you want to look at where you want to go, and there's a gap there. And then you have to know the priority and the plan for improving it: what are the specific steps that you hope to take in order to close those gaps and get where you want to go? That's a really good communication plan at an executive level, because they want to know the bottom line, and they want to know how to get there.
And so, give them those kinds of tools. Hopefully, what we've talked about here will give you a better idea of what that end state really is; only you know where you've been and where you're at. But when you know what the end state is, or how to find it, then you can determine what the gaps might be. And some of the gaps might be test mix; to get to a state of fewer customer issues, some of that gap might be the test mix.
If you want to take that individually, that's great. You might have other problems that you need to bring to your test leadership, or your leadership, and say, like, yeah, well, test mix is a part of it, but we're having customer issues because of other things, too. That'll come out from some of those metrics, just paying attention, listening to what customers say.
And then, yeah, figuring out how to get there, that's the hard work. If it were easy to get there, somebody would have done it already, probably, so yeah, making the plan and having it step-by-step, so that they can see, you know, you want leadership to be able to have confidence that if they invest in this, that it'll actually get to where they need it to go. And so, having a well-thought-out plan for that, is a good strategy for communicating it.
Kavya (Director of Product Marketing, LambdaTest) - That's an insightful response, Bryan. I think it's also about tying the testing strategy back to the business impact, because, as you rightly said, the leadership also wants to see not just the what, but the why. And of course, the clear game changer would be communication between the QA teams, the leadership, the business, and the multiple different stakeholders who are involved in it. Yeah, I did promise that this would be the last question, but I just saw one more question from the audience. Is it okay if I go ahead with it?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Of course, yeah.
Kavya (Director of Product Marketing, LambdaTest) - Great. So, with the rise of GenAI and agent-driven testing, do you see this changing the balance between automated and exploratory testing?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - So, it's interesting. I think it will eventually. I don't know that it's quite there yet, at least from what I've seen so far. I went to a little conference a couple of weeks ago about, you know, agent-driven testing, and to the point that it's at right now, it's a little bit more about generating the code that's needed to perform a test, an automated test.
And I think this is one of the things that can be a tool for testers, even in this space of getting the right test mix. If you're in a company situation where the developers aren't writing the tests, well, sometimes testers aren't the best at writing code, right? And so it might take us a long time to struggle through getting the acceptance criteria tests out of the way, to give us time to do the exploratory testing that will really be needed in order to pave the way for the customer.
And so, this can be a game changer in the sense that, whether it be developers or testers, let's get those acceptance criteria working and out of the way. Like I said, we're gonna have to do the proper reviews, did they get all of them, you know, all that kind of stuff. But because acceptance criteria are a little bit more in the camp of let's just check the box, AI is getting pretty good at checking the box.
If you've written out the acceptance criteria clearly, in an organized way, it can do a pretty decent job at checking the boxes and writing those. So, use it, leverage it in order to be able to explore the application more. That would be, you know, I think that would be a huge impact.
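[Editor's note] As a hedged sketch of what "written out clearly, in an organized way" could look like in practice, here acceptance criteria are kept as structured data so that either a developer or an AI assistant (with the careful review Bryan insists on) can turn each one into a focused check; the story, criteria, and placeholder checks are all invented:

```python
# Hypothetical sketch: acceptance criteria as structured data, each mapped to
# an automated check. The lambdas stand in for real assertions against the
# system under test.
from dataclasses import dataclass

import pytest


@dataclass(frozen=True)
class Criterion:
    id: str
    given: str
    when: str
    then: str


STORY_123_CRITERIA = [
    Criterion("AC-1", "an empty form", "the user opens it", "Submit is disabled"),
    Criterion("AC-2", "a completed form", "the user submits", "a record is created"),
]

CHECKS = {
    "AC-1": lambda: True,  # placeholder: e.g. assert the Submit button is disabled
    "AC-2": lambda: True,  # placeholder: e.g. assert the record exists afterwards
}


@pytest.mark.parametrize("criterion", STORY_123_CRITERIA, ids=lambda c: c.id)
def test_every_criterion_has_a_passing_check(criterion):
    # Flags any criterion that was written down but never automated.
    assert criterion.id in CHECKS, f"no automated check for {criterion.id}"
    assert CHECKS[criterion.id](), f"{criterion.then!r} not satisfied"
```

The value is less in the code than in the clarity: criteria organized this way are easier for a developer, a tester, or a reviewed AI assistant to translate into checks without missing any.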
Kavya (Director of Product Marketing, LambdaTest) - Thanks so much for sharing that.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Oh, yeah, there's another question, but I want to put in one last little note on this question, because what can be behind this is sometimes some insecurity about testing jobs. And I think that comes back to the mindset: are we doing automation so that we don't do manual testing anymore? If the mindset is, no, we're doing automated testing because we want to have more time to do better manual testing, then I don't think our jobs are at stake as testers. Until we have AI using the systems instead of users; well, then AI can test it for itself, right?
So there's one other phase of generative AI that I think could come later, and that is, I haven't seen it yet, but that's where you release GenAI as a tester, as a manual tester, to, like, understand what a system is and try to perform what it thinks that system is offering. I haven't quite seen that yet. That would be an interesting next development. And that's where we still, as testers, have to ask: does AI really know the context like the tester does?
The industry, what the user's actually trying to accomplish? When it can do that, then I think it's a more valuable tool as a generative AI. It may be possible that it could be leveraged for exploratory testing at some point; I haven't seen that yet, though.
Kavya (Director of Product Marketing, LambdaTest) - Okay, thank you for sharing that. I think we have one more question, which is, in your team, who owns the decision on the mix of manual versus automated tests? Are they QA engineers, developers, or leadership? And how do you resolve the conflicts in these priorities?
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, it's usually a conversation. We have a planning session, or a grooming session, where we talk about, you know, what are the automated tests that we're going to have as part of the acceptance criteria.
And the testers have their input in that conversation, too, to say, I think we should try this case, I'd like to automate this case. And then developers can say, well, that's not going to be a very automatable test, or, you know, that's a little bit beyond the scope of what we would want in an ongoing regression test suite, for example. So it's usually the case that we come to consensus in those meetings about what makes the most sense.
I don't know that it has ever come down to a single person. I think in our company, it would be the test manager who ultimately owns what should be automated versus written as a manual test. In our company, at least, it's the responsibility of the testers, because of the regulatory requirements that we have; there are certain things where we get to say, no, we need this automated for now, it has to be a part of it, or, we can get away with doing this as a manual test for now.
But I think if the team is in alignment on what they're trying to do, the end goal, then a lot of that stuff should kind of work out together, right? If developers don't want to write tests, and they'd rather just have somebody else deal with it, which means a manual test, rather than writing it themselves, then it's not really a collaborative environment, right? So, if it's just people trying to get out of work, then you need to agree on what it is that you're trying to accomplish and have some standards about how you're going to work together. So it's probably worth setting up some conversations to talk about what things, in general, we automate versus what things we test manually.
Kavya (Director of Product Marketing, LambdaTest) - Thank you so much, Bryan, and that was a very interesting response, and I'm sure that our audience would be able to take away a lot of insights today. As we wrap the conversation, just wanted to check if you have any parting, you know, words for the audience.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yeah, I didn't have anything prepared as closing remarks, but I think, you know, hopefully this has been a helpful presentation for taking a look at what your test mix is. I don't know that that's something we necessarily focus on very much, but understanding what the goals of a proper test mix are hopefully is helpful, and then you can get there. So, keep at it, and happy testing.
Kavya (Director of Product Marketing, LambdaTest) - Thanks, Bryan. So, I'd like to thank Bryan for sharing his valuable expertise and insights with us, and of course, thank you to the audience for being a part of this session. As always, we want to keep these conversations alive, so we would appreciate it if you stay tuned for more such episodes.
And, you know, we'll make sure that we keep bringing in more leaders who are reshaping the world of QA and technology, like Bryan. And, of course, you can reach out to Bryan over LinkedIn as well, in case you have any other questions or want more insights. Thank you so much, everyone. Have a great day. See you in the next session. Bye. Thanks once again, Bryan.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Yep, thank you. Thanks, everyone.
Kavya (Director of Product Marketing, LambdaTest) - Thank you.
Bryan Gullette (Senior Manager of Test Engineering, Dassault Systèmes) - Bye.
Speaker
Bryan Gullette
Senior Manager of Test Engineering, Dassault Systèmes
Bryan Gullette is a Senior Manager of Test Engineering at Dassault Systèmes, working for the Medidata brand. Bryan has over 15 years of experience testing software in the clinical trials software space, 10 years with Dassault, and previously another 5 years with PAREXEL. Bryan has been fortunate to have worked on highly functional teams that have been able to support the delivery of high-quality, amazing new technology into the sector. Outside of work, Bryan enjoys spending time with his family and being involved at his church in Connecticut.

Host
Kavya
Director of Product Marketing, LambdaTest
With over 10 years of marketing experience, Kavya is the Director of Product Marketing at LambdaTest. In her role, she leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Kavya played a key role at Internshala, a startup in Edtech and HRtech, where she managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Kavya excels in creating and executing marketing strategies that foster growth, engagement, and awareness.
