XP Series Webinar

Automated Test Execution Reporting

In this XP Series Episode, you'll unlock the power of automated test execution reporting. Elevate your testing efficiency with insights and analytics, transforming your approach to software quality.

Watch Now

Scott Paskewitz

Automation Architect, Lincoln Financial Group

An accomplished Software Engineer who can architect, design, implement, manage, and inspire solutions for Fortune 500 companies. Specializing in Quality Assurance Test Automation Frameworks, architecture, and DevOps continuous delivery. Experienced in various SDLC methodologies, including requirements definition, prototyping, proof of concept, design, system integration, testing, implementation, and maintenance. Enthusiastic and innovative around test automation.

Harshit Paul

Director of Product Marketing, LambdaTest

Harshit Paul serves as the Director of Product Marketing at LambdaTest, playing a pivotal role in shaping and communicating the value proposition of LambdaTest's innovative testing solutions. His leadership in product marketing ensures that LambdaTest remains at the forefront of the ever-evolving landscape of software testing, providing solutions that streamline and elevate the testing experience for the global tech community.

The full transcript

Harshit Paul (Director of Product Marketing, LambdaTest) - Hello, everyone, and welcome to another exciting session of LambdaTest XP Series. Through XP Series, we dive into the world of insights and innovation, featuring renowned testing experts and business leaders in the QA ecosystem. I'm Harshit Paul, Director of Product Marketing at LambdaTest, and I will be the host for this session.

Joining me is Scott Paskewitz, an experienced QA leader specializing in test automation frameworks, architecture, and DevOps continuous delivery. With rich experience in diverse SDLC methodologies, he brings expertise in requirements definition, prototyping, design, testing, and much more.

At Lincoln Financial Group, his enthusiasm for innovation, test automation, and dedication to mentoring teams drive organizational growth and success. Scott, thank you so much for joining us. It's a pleasure to host you.

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - Yeah, thank you for hosting me, Paul. I appreciate it. Looking forward to today's presentation.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, definitely. I'm sure our viewers and all of us, including me as well, are looking forward to this talk. And in today's show, we'll be unraveling the magic behind “Automated Test Execution Reporting”.

Picture a reporting solution that does not play favorites, one that is compatible with any test automation framework regardless of your toolset and code base. And the best part: Scott is going to show us how to do it seamlessly with new toolsets, without worrying about data migration issues or anything else you'd normally have to take care of while switching tools. So let's dive into today's show. Scott, the stage is all yours.

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - Okay, great. Let me go ahead and share my screen. Thanks, Paul. As he mentioned, we're gonna be talking today about automated test execution reporting. Before we begin, I figured I'd do a quick introduction.

So my name is Scott Paskewitz, and I'm an Automation Architect. I'm based in Omaha, Nebraska, and I've been in the field for over 25 years. I actually started my career in test automation, being a QE engineer working with all forms of automation throughout my career.

I began work with TestRunner, a tool very similar to WinRunner, if you're familiar with that, and went through all the various tools over the years: Compuware products like QA Run and TestPartner, up through a lot of the HP tools such as QTP and UFT, and on into Selenium, like many of you are probably using. But interestingly enough, when I started my career, we were using a tool called TestRunner, and it was more of a master-client scenario.

So you actually had a physical hardware connection that connected the first PC, which was the master, to the client PC that executed those commands. So think of it as like a hardware version of a web driver back in the day. I'm aging myself here, but we actually used to have a huge setup, very costly.

We're talking hundreds of thousands of dollars for this interconnection of hardware and software. But all of these things have been changing over time. That was back when I was working in telecom. I worked at Hewlett Packard, also Mercury Interactive.

And as Paul mentioned, I'm now at Lincoln Financial. So my degree is in Electronics and Telecom Engineering, but I've been specializing in all forms of test automation throughout my career.

So tested everything that you're probably familiar with, web apps, mobile apps, going back to desktops, Windows applications, APIs, emulator-based testing, databases, image comparisons, you name it. So I've got some of my information on the screen here. Feel free to follow me on LinkedIn. I'll have this information at the end as well. So let's jump right into it.

So the agenda today, we're going to be talking about reporting your test execution results. So, you know, you probably have some form of reporting already set up today. So why are we talking about this again if you've already got something?

So I'm going to show you, at least for our organization, our problem statement, and then get into some of our layout and design schemas, how we went about building this system, and why it's beneficial.

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So my question to everybody on the call is, how are you reporting your test execution results today? Are you using maybe an enterprise test repository? And when I throw out words like enterprise, I'm thinking more about like a paid solution.

So if you think about a Quality Center or ALM solution, where you have an entire management system with maybe a test repository, defect management, and requirements management, what sort of testing or reporting are you doing out of those particular tools? There are a number of tools on the market, you know, TestRail or Xray, qTest, the list goes on and on.

And I mention this because, let's just look at QC/ALM. I did work at Hewlett Packard, and we were using Mercury Interactive products for quite some time. And it was very, very convenient in that we had a merger acquisition where we acquired Mercury Interactive.

So then I was able actually to get in there and work with the Mercury staff, whether that's the support staff or the developers. I've been in the Mountain View location, worked with them throughout their process, and really got good insight into the tool and the process.

So in that, things have been changing over the years. We went from Mercury to Hewlett Packard, then they switched over to Micro Focus, and we're still using Quality Center in my company today. Ironically enough, we went to renew the contract, and I just realized here in the last year they've switched over to OpenText, which now owns the company.

So constant change. The product hasn't changed, but the vendor has, and that's the one constant in our industry, right? Change. I've mentioned some of the flows from the HP products over through Selenium. Now we're up and running on Selenium, but up on the horizon, we're doing a lot of TypeScript stuff, so we're bringing in Playwright, and AI testing is all on the horizon.

So things are constantly changing, whether it's the repositories we're using, whether it's the tools or the framework. So there's constant change. So how do we scale that? How do we handle that and minimize the disruption to our teams? We went about that by using some external test execution reporting tools. So that's what I want to get into, and kind of discuss today.

Some other options that you're probably using: if you have an open-source framework, you're probably leveraging either a coded or a codeless solution. So how does that look? Does it have some reporting that you're using today? The different frameworks have different add-ins you can get. TestNG will obviously be something everyone's probably familiar with, and then there's Qmetry.

Qmetry is a great add-in. It has a lot of step data, especially if you're using Gherkin or Cucumber. It has the ability to show those individualized test steps and really gives you just a ton of data to view there. How are you accessing or using that data? And the same with like an execution CI/CD framework, right?

We use GitLab, but depending on what you use, there's a ton of information logged out to the consoles on those. So how do you use that and how do you incorporate all of this information into your reporting system where you might have all these different components in your framework, in your architecture? How do you bring those all together and really access them easily?

So here at Lincoln, what we've done, and I've used similar tools throughout my career, is we have a web-based reporting system with a centralized database on the back end that we've leveraged to help bring everything in the house and everything to a single centralized repository.

So the problem statement, going back, you know, multiple tools, especially if you're using vendor tools, our biggest thing is there's a huge drive going open source with everything, right? Anytime there's a cost, management's always looking at the best way we can reduce this cost or eliminate the cost, right?

So high-cost vendor-based tools like Quality Center and others, granted, do have a lot of reporting solutions built into them. But are you paying for them just for those reporting solutions? Are there other benefits? What happens when those change over time? Reporting may lack details as well, right? Are you getting everything out of that Quality Center report? Are you getting everything out of Qmetry or your TestNG reports?

How do you summarize that data, right? So there are different pieces and different reports. I'm sure this piece is great and another piece from another report, but how do we bring those all together into a single view of data? The other big problem that we are running into is access, right?

So most people on your QE teams tend to have access to all your systems, probably. Whether that's your Quality Center, a repository, or GitLab, probably all your people executing tests will have access to your repositories there, your Qmetry reports, but where are those even located?

So the problem becomes maybe when you start to share these reports with other parties in the organization, right? So the project managers want to always be on top of things, know the status, know are the tests trending in the right direction, and then they'll share those reports as well. They may share them with their management or with business users and UAT users, and you've got all these different parties that want to see their status.

So traditionally, with a lot of these other systems, while it's easy enough to access per se, it still requires you to stop, get them to fill out a ticket, wait for that access, and get access to it. And a lot of times, if someone just wants to look at a report, they're either not going to go to that extent to get access, or they might just look at a screenshot whose data becomes out of date in a matter of hours or the next day.

So our approach, how do you handle that, right? We've designed a reporting solution that is role-based. So for our basic reports, we wanted something where everybody had access to view that within the organization. So another challenge that we ran into was reporting on defects. So how do you view all your defects in a centralized area?

So if your organization is anything like ours, we're using multiple defect systems. I've outlined here that we have Jira, and we have VersionOne for our agile management. We still have a Quality Center defect management repository which we're using, so that's three systems. We're also using vendor Jira systems; we have external vendors for some tools.

When an issue is reported or found, we have to open up a bug in the vendor's system. Then the challenge just keeps getting worse the more we look at it: how do you report on the vendor system, right? So a lot of times, traditionally, we would open up yet another duplicate copy of that defect on an internal system so that we could track the status of the external vendor defect.

So how do you bring all those together and be able to track them easily? So what we wanted to do was put something together that would be a low-cost solution to this and give us some great reporting options. So what we did is we built a low cost reporting solution with a centralized database. So we're taking all of our data at runtime and we're just logging it to a database.

Ultimately, we then put a web-based reporting solution on top of that database, and we're able to create our own customized reports. So one of the advantages of doing this, besides being able to report exactly how we want to report, is that we have compatibility with pretty much every test automation framework out there, right?

So as these tools change over time, as we're going, say, from UFT to Selenium to Playwright to AI, we have all of our data. It's centralized in this database that we've created. And as things change, the underlying data we're reporting on doesn't have to change.

So we are able to handle compatibility across different coding languages, and really bring that data out of those different tools and make it very flexible. And how do we go about handling this, right? The data has to get into the database because it's not built into all these systems. The way we went about it, initially, was through just a SQL update. You execute a command at runtime and pass in your pass or fail information along with whatever other information you wanna log.

Later on, we changed that into a stored procedure so we had a centralized point of maintenance. We have one stored procedure, and assuming you have somewhat of a framework built around your solution, you just have the end of a test script make a call to your pass or fail logic and add that call to the database there.

For other languages that didn't handle that as easily, we just had a simple API call that could log those results. So here is an example of the database schema which we put together. We have a relational database, and what we're using is actually Microsoft SQL Server Express. So we're using the free version of SQL Server. Again, we were looking at a no-cost solution. So why did we go with Express? It could have been really any database, right?

If you're an Oracle shop, use Oracle; you could use MySQL or MongoDB, really any solution that you're familiar with or have a staff that's familiar with. So we use SQL Server Express, and really the only thing we were missing in the Express version was around automated backups, right? You don't want to have all your data in this database and not be able to get to it or back it up if something does ever happen. The backup capability itself is still built in; we just had to schedule it ourselves.

So we just wrote a little BAT script that does nightly backups of that data, and we've been running with this same database for 14 years now at Lincoln. We've got about one and a half gigabytes of data in those daily backups. We've never had to do a data cleanse or data archiving. It's just been a very manageable amount of data, based on how you put together your database schema.
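
For illustration, here is a minimal T-SQL sketch of the kind of backup a nightly BAT job could invoke through sqlcmd. The database name, server instance, and path are hypothetical placeholders, not the actual Lincoln setup.

```sql
-- Hypothetical T-SQL a nightly BAT job might run, e.g.:
--   sqlcmd -S .\SQLEXPRESS -i nightly_backup.sql
-- Database name and backup path are placeholders.
BACKUP DATABASE TestResultsDb
TO DISK = N'D:\Backups\TestResultsDb.bak'
WITH INIT,       -- overwrite the previous night's file
     STATS = 10; -- print progress every 10 percent
```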

So if we look at the database layout here that I have on screen, this is very much a relational database. Again, if you're all familiar with Quality Center, it's very similar to the layout that you would find in Quality Center. So at the bottom of this, we have two main tables. We have table results, and we have table run. And this is going back to the concept of test iterations.

So this would be where you have a test and maybe you wanna run it multiple times with different datasets; each data set would be considered an iteration. So in the parent table, you would log things like the test ID, the test name, maybe the machine it was executed on, and who executed it.

But then, with each iteration, you can get down and log the iteration number, the pass or fail result, and the duration of the test. And if the test failed, we capture a snippet of that first error message so you can see what's going on, along with a path back to the image. So we take a screenshot of that first failure.

Now it's only the first failure because we don't wanna just proliferate the system with lots of images, but we do take that first image. This is just a path that we're using in the database, so it doesn't take much resource, but we do store that image for easy access.

You can see the way we've set it up, we're also doing things like run counts, so we can figure out how many times this test takes to execute before it passes; very interesting data from a historical perspective. And then we've got all the different fields here that might be of interest to your particular organization, things like the browser: we're logging that it's Chrome version 118 or whatever the version is, we're logging all that information.

The version of WebDriver, the date, the test type, whether it's an automated test, right? And if it's automated, is it Selenium? Is it JMeter? Is it a manual test? Whatever you're doing can be logged here. And you can see there are a couple of fields we've got called comment and defect.

That's what I mentioned before, how do you bring all those defects into one centralized system? We have a field where you can go back and associate or tie those defects to individual test case executions. So now you have a way of tracking those defects back to a test case.

And this list goes on and on. I didn't show the entire table here, but other information you might want to think about if you're designing this is what you want to see and what's good historical data to have that's associated with the test. So maybe things like the operating system that it's executed on.

The mobile device: are you testing on mobile? If so, what device? And then the repository, right? You probably have your tests stored in GitLab or another repository. We're recording that Git URL, the project name, the branch, the job number, the build information, and all of that is stored in there as well.

If you are using Gherkin-based tests and have those test steps, you may wanna save something like the feature file, the execution tags, or scenario tags. All of this can be very helpful in accessing that data later, so give as much information as you think you might ever want to query.
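
To make the schema discussion concrete, below is a minimal T-SQL sketch of the two core tables described above. The table and column names are hypothetical stand-ins for the fields Scott mentions, not the actual Lincoln schema.

```sql
-- Hypothetical sketch of the parent/child tables described in the talk.
CREATE TABLE dbo.TestRun (
    RunId        INT IDENTITY(1,1) PRIMARY KEY,
    TestSetId    INT          NOT NULL,          -- ties back to project/folder/test set
    TestName     VARCHAR(100) NOT NULL,          -- constrained length, no 500-char names
    ExecutedBy   VARCHAR(50)  NULL,
    MachineName  VARCHAR(50)  NULL,
    TestType     VARCHAR(20)  NULL,              -- e.g. Selenium, JMeter, Manual
    Browser      VARCHAR(40)  NULL,              -- e.g. Chrome 118
    GitUrl       VARCHAR(255) NULL,
    Branch       VARCHAR(100) NULL,
    BuildNumber  VARCHAR(50)  NULL,
    RunDate      DATETIME     NOT NULL DEFAULT GETDATE()
);

CREATE TABLE dbo.TestResult (                    -- one row per data-driven iteration
    ResultId       INT IDENTITY(1,1) PRIMARY KEY,
    RunId          INT          NOT NULL REFERENCES dbo.TestRun(RunId),
    IterationNo    INT          NOT NULL DEFAULT 1,
    Result         VARCHAR(10)  NOT NULL CHECK (Result IN ('Pass','Fail','Skipped')),
    DurationSecs   INT          NULL,
    ErrorMessage   VARCHAR(255) NULL,            -- first-failure snippet only
    ScreenshotPath VARCHAR(255) NULL,            -- path to the first-failure image
    DefectSystem   VARCHAR(20)  NULL,            -- e.g. 'JIRA', 'ALM', 'VENDOR'
    DefectId       VARCHAR(50)  NULL,            -- defect number in that system
    Comment        VARCHAR(255) NULL
);
```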

And again, I mentioned we can call this either through a SQL update, a stored procedure, an API call, or whatever is easiest for you. I've put a little snippet on the screen here of a call to the stored procedure that we're using. You can see we call this log results 3-5, and we're simply passing in a bunch of parameters where we specify the test name and all the information we want to record into these tables.

So a lot of these fields are optional. We have a few that are mandatory, obviously the test name, the result, the duration, some of those fields, but a lot of the other ones are optional, based on whether the test actually has that information available. Not all tests have WebDriver, obviously. The performance tests that run on JMeter probably aren't gonna have a WebDriver.
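
The on-screen snippet isn't reproduced in this transcript, but a call along these lines gives the flavor of what a test's teardown would execute. The procedure name, parameter list, and values below are hypothetical, not the actual "log results 3-5" procedure.

```sql
-- Hypothetical call shape; the real procedure and its parameters will differ.
EXEC dbo.usp_LogResult
     @TestName       = 'Billing_SubmitPayment',
     @Result         = 'Fail',                        -- mandatory
     @DurationSecs   = 212,                           -- mandatory
     @IterationNo    = 2,
     @Browser        = 'Chrome 118',                  -- optional: omitted for JMeter/manual runs
     @ErrorMessage   = 'Timed out waiting for #submitBtn',
     @ScreenshotPath = '\\reportsvr\screens\Billing_SubmitPayment_2.png';
```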

Or if it's a manual test, there's not gonna be a WebDriver. The other thing, if you look at this database, is we do have a few more tables that I didn't discuss. The main data is down in the results and run tables, but we also have the concept, as in Quality Center, where you have a project, then a folder, and then a test set.

So this gives us the ability to create a logical grouping of data. So over time, you might have a project that you're working on, and then with each release, we would typically create a folder. It would be something like the release name, and then you can group different chunks of data into test sets. So it gives you a nice way to break out your reporting by chunks of data, and that leads to a very organized test report system.
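
As a hedged sketch, the project / folder (release) / test set hierarchy just described could be modeled with three small tables like the following; again, the names are illustrative rather than the actual tables.

```sql
-- Illustrative hierarchy: Project -> Folder (release) -> TestSet.
CREATE TABLE dbo.Project (
    ProjectId   INT IDENTITY(1,1) PRIMARY KEY,
    ProjectName VARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Folder (
    FolderId   INT IDENTITY(1,1) PRIMARY KEY,
    ProjectId  INT NOT NULL REFERENCES dbo.Project(ProjectId),
    FolderName VARCHAR(100) NOT NULL            -- typically the release name, e.g. 'R4D1'
);

CREATE TABLE dbo.TestSet (
    TestSetId   INT IDENTITY(1,1) PRIMARY KEY,
    FolderId    INT NOT NULL REFERENCES dbo.Folder(FolderId),
    TestSetName VARCHAR(100) NOT NULL           -- e.g. 'UI smoke', 'Billing regression'
);
```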

So what's next? So now you built this database, you're collecting all this data with every test execution you run. If you're running a lot of CI/CD pipelines, you're just probably amassing a ton of data. So how do you get to it? How do you view it? So the next piece in this puzzle would be to display that data. So a couple of choices.

If you want to go all out and really have an interactive user interface, that's the direction we wanted to go from the start, because we're not only reporting; in our instance, we have a lot of accessory tools that are available to our teams. But the nice thing is you can do this at any pace you want. Obviously, it started out just for reporting, but it grew and grew over time. Everybody's in there using it every day.

So it's a very interactive report, and in our scenario we've built it with Visual Studio using VB.NET. But use whatever works in your organization, whatever you're familiar with: React, or Node.js, which is getting popular. I know a bunch of people on our team really want to redesign it in the newer languages. But if you want to get started with something simple, just look at the reporting to begin with. I'm showing on the screen here an interactive report using Grafana.

Grafana, if you haven't used it, is an excellent open-source solution for interactive data visualization. It comes from Grafana Labs, and it lets you see all this data via charts and graphs that are very professional-looking. You can see there are almost unlimited types of data to display here: bar graphs, pie graphs, almost an unlimited selection of types, and they're very fancy.

So how do these work? You simply connect the graph up to that database by providing your connection string, and then you write your query, simple or complex, whatever it takes to get that data, and you're pretty much in business. It's as simple as writing that query and filtering that data, and the report just comes to life.
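
For example, a Grafana table or bar panel could be driven by a query along these lines. It uses the hypothetical tables sketched earlier, so the names are assumptions rather than the real schema.

```sql
-- Hypothetical pass-rate-per-test-set query a Grafana panel might run.
SELECT ts.TestSetName,
       COUNT(*)                                           AS TotalRuns,
       SUM(CASE WHEN r.Result = 'Pass' THEN 1 ELSE 0 END) AS Passed,
       CAST(100.0 * SUM(CASE WHEN r.Result = 'Pass' THEN 1 ELSE 0 END)
            / COUNT(*) AS DECIMAL(5,1))                   AS PassPct
FROM dbo.TestResult r
JOIN dbo.TestRun    tr ON tr.RunId     = r.RunId
JOIN dbo.TestSet    ts ON ts.TestSetId = tr.TestSetId
WHERE tr.RunDate >= DATEADD(DAY, -30, GETDATE())          -- last 30 days
GROUP BY ts.TestSetName
ORDER BY PassPct ASC;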

So if you have experience with Grafana, you can get these reports set up literally in minutes. It's really impressive what's available today. Some of these are even interactive within Grafana as well; I'll show a little bit of that here. And we've talked about no cost or low cost; the only thing that might be a cost, at least in the scenario that we have, is that to run a lot of this, we do require a Windows server.

So we need somewhere to be able to host this stuff. Unless you're hosting it in the cloud, depending on your cost structure, you'll need a server; in our case, we have a local server, and we're actually hosting everything on that one server. So we've got SQL Server Express running.

We have a .NET web server running. We also have the Grafana reports running from the server. And the last thing, as I mentioned, is the screenshots, the images of those failures; we're also storing those on the file system on that server. So that is the one thing where there might be some cost: actually spinning up somewhere to store and run all of this from.

So what does the report look like? So I've got a sample report here, and this isn't as fancy looking as some of those Grafana reports that we just looked at, but what we're looking at here is a simple test execution report. And this is one that we put together, it's probably 14 years old now, and this was actually built within Visual Studio. It's a simple grid layout, so nothing too fancy, but it displays the basic data.

People have been using this without complaint. It's been very helpful. And we just haven't taken the time to make it look very modern because it's been working so well for so long. So what we have here, you're seeing a screenshot of our application that's actually called DDFE. So that stands for Data-Driven Frontend.

And while I'm bringing this to you as a test reporting solution, this particular application has a lot of different features. So it originated life as a kind of storing objects for test automation, your page object model, rather than having those coordinates and X paths in text files, it actually houses those in a database.

So that gave us a lot of flexibility. We had a lot of tests that ran many, many minutes at a time, so rather than stopping runs if we ran into an XPath or coordinate issue, it gave us the flexibility to just update it in the database on the fly, and all of your tests are able to keep running and passing with those revised coordinates.

But that's just one of the features here. Reporting is one of the bigger ones. You can see down at the bottom left it says it's a component of the Raptor framework, which is the name of our test automation framework. And this is really the core that brings everything together. So if we look at the top here, we have GP, which is the line of business. These particular tests are primarily Selenium, so it's capturing that data here.

And then you get into the project or the application name. These go back to those folders that I showed, or I should say the tables that I was showing, where you have the ability to set the project, the folder, and the test set. So the actual folder or release name is R4D1, which gives you the name of the release. This is all just open text, whatever makes sense for what you wanna see on the reports and however you wanna categorize your information.

So then the report is broken down by test set, right? So what's a test set? A test set is just a logical grouping of tests. So maybe you wanna isolate your user interface tests altogether or your unit tests as a test set. However, you wanna group that information.

Maybe you wanna run it based on who's executing them, which browser you're executing from, or whatever makes sense in your organization. The rest of the information is probably very standard and similar to other reports you may have, right? So you've got the total number of tests, a lot of the standard information, your pass or your fails, the percentages, and that's all fairly typical.

So we're displaying that here. Then some of the customizations we've made through the use of the database include the ability to link defects back to those executions. So for example, here you can see we have a Jira defect and the Jira number listed here. It's a clickable hyperlink, which I'll get to, and I'll show a live snippet here shortly. But you can see here it is linked to one test.

So that's what the one in the parentheses is denoting: how many tests are linked to this defect. And the overall defect count in this case is one. The other column that we've added to these reports, which has been extremely helpful, especially to the management team, is called unresolved.

And this is kind of a play off of the percent run and the percent pass columns. So in this data example that we're sharing here, you can see that we have run all of the tests, and we have two failures across all, what is it, four hundred and ten tests.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, this really looks like a game changer. I think it's pretty basic, but somehow people don't think this through while they're preparing the reporting mechanism. Having something like an unresolved column can actually help you narrow down and look at things from a priority perspective, as to where you need to put bandwidth first.

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - And that's one simple column. I've used this column throughout my career in different organizations. We found that it has huge value, right? So, in this case, this test failed, and it shows a failure here, but since there's a defect linked, it's no longer unresolved.

Because at the end of the day, when you have an issue, it's usually one of two things, right? You either have a bug, and you need to open up a defect, or it's probably a test script issue, and you need to either update the test script or test data. And until that's done, there's still work to be done before you can call that test cycle complete.

So the unresolved column shows that, hey, we have a failure, we don't have a defect yet, so you still have this action item left. So when the project managers are looking at this thing daily, they're looking at this number right here, and that's it.
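
In data terms, "unresolved" is simply a failure with no defect linked yet. As a rough illustration over the hypothetical tables sketched earlier (not the actual Lincoln schema), the query behind such a column might look like this:

```sql
-- Unresolved = failed result with no defect linked yet.
-- A production version would also restrict this to each test's latest run.
SELECT ts.TestSetName,
       SUM(CASE WHEN r.Result = 'Fail' AND r.DefectId IS NULL
                THEN 1 ELSE 0 END) AS Unresolved
FROM dbo.TestResult r
JOIN dbo.TestRun    tr ON tr.RunId     = r.RunId
JOIN dbo.TestSet    ts ON ts.TestSetId = tr.TestSetId
GROUP BY ts.TestSetName;
```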

Harshit Paul (Director of Product Marketing, LambdaTest) - Mm-hmm. That certainly puts things in perspective, yes. Ha ha.

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - And the nice thing, where we've gone a little bit different with this report than you're gonna get out of a lot of them, is that being centralized, we've actually enhanced the automation framework so that it has the ability to log to this execution report over and over.

So, a lot of these high-level tests are UI-based, and they take many minutes to run; in some scenarios, they're very complex tests. We don't want to rerun all those tests after they've passed, right? These aren't test cycles that are going to take a day or two. They're usually a week or two.

And so when you do go back and rerun, in this case this test set had 43 tests in it. Rather than changing your XML file to rerun just that one failure, at runtime when you kick off the 43 tests, it'll go into the database and say, oh look, those 42 have already passed, I'm just gonna skip them, and it'll go right to that one test that needs to run and execute it.

So that's how we're getting this unresolved count that keeps going down, is because we're rerunning tests over and over and working this test cycle till it's 100% complete. So the possibilities, once you pull that data out and have it accessible via a database, are kind of endless, right?

So we still have traditional pipeline executions where you run a suite of tests and it's done, and it's categorized, and you don't go back to it. But if you wanna run and keep rerunning to completion, we have a scenario where you can log directly to these test sets.
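
The rerun behavior described above boils down to a lookup before each test starts. Here is a hedged sketch, again using the illustrative tables; the parameters are supplied by the framework at runtime, and the actual Raptor framework logic is not shown in the talk.

```sql
-- Has this test already passed in the current test set? If a row comes back, skip it.
SELECT TOP 1 r.ResultId
FROM dbo.TestResult r
JOIN dbo.TestRun   tr ON tr.RunId = r.RunId
WHERE tr.TestSetId = @CurrentTestSetId   -- supplied by the framework at runtime
  AND tr.TestName  = @TestName
  AND r.Result     = 'Pass';
```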

And then, of course, you're going to get that history with them as well. Again, a lot of these reports are going to look very similar to other execution history reports. The nice thing here is you can have any number of data points and data to save off. And in our scenario, like I said, as it transitions over test tools over time, the data is still there.

So we have never removed data in our 14 years. There's just a lot of data, but the fields are limited. So when you're doing that initial requirements work with your team, figuring out what fields you want to see and what you want to record, just keep in mind how you want to create that schema and that database, right?

So put constraints on there. Put field lengths on, right? You don't need a test with 500 characters in the name; maybe limit it to however many characters you're going to need. And then have those database constraints so your data doesn't get out of control. Things like the error message here: we limit this field to 255 characters, because yes, your error messages can be a book, but we don't need all of that in this report.

We just need something basic, and we can direct people to the logs if they want all those details. They don't need to be in the database. So the nice thing here too is we have that image file I mentioned, right? So it's a clickable link, and you can just get to that at any point over the history of the test.

You can see things like the execution environment, UAT. I mentioned how many times it took to run a test before it passed; you can see this test has taken a few tries on average to pass. So historical information helps you determine which tests are running well and which ones might have problems.

And the same thing with defects: if there's a defect, it would be displayed here. I'll show that in a minute. But you'll have a running history over the life of a test: how those different defects tie to all the different systems, and where they live. So let me do just a quick little demo here. Let me share my screen.

So here is a live view of that defect report I was referring to. You can see how the links become clickable. If I click on this, it's going to take me to that defect. In this case, this is an example of a vendor defect system, so it's taking me to an external website. I'm not gonna log in, but it gives us a way, if we had multiple defects or multiple systems, to get to all of those from a single location.

And programmatically, we really just have a single table in the database that says, okay, if it's the vendor defect system, create the hyperlink this way; if it's Quality Center, create it that way. So we just have data to manage to build these links at runtime, which gives us the ability to point and click and go to them. It's a very convenient way of centralizing all of your defects.
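
A hedged sketch of that link-building idea follows, with hypothetical table names and URL patterns; the real lookup table at Lincoln isn't shown in the talk.

```sql
-- One row per defect system, holding a URL template used to build clickable links.
CREATE TABLE dbo.DefectSystem (
    SystemCode  VARCHAR(20)  PRIMARY KEY,       -- e.g. 'JIRA', 'ALM', 'VENDOR'
    UrlTemplate VARCHAR(255) NOT NULL           -- e.g. 'https://jira.example.com/browse/{0}'
);

-- Build the hyperlink at query time by substituting the defect number.
SELECT r.DefectId,
       REPLACE(ds.UrlTemplate, '{0}', r.DefectId) AS DefectUrl
FROM dbo.TestResult   r
JOIN dbo.DefectSystem ds ON ds.SystemCode = r.DefectSystem
WHERE r.DefectId IS NOT NULL;
```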

Let me just click through and show a quick example. So for example, the billing test set has 52 tests in it. If you wanna see more details, you just click on the details link, and you can see how quickly we can render it. You're gonna get a lot of information. I'll add, there's an app column here, and you can see how many times it took to run the test before it passed, right?

You can see all the different execution information associated with that test. And then if you want to see how this one particular test has done over the course of history, you can click on the history tab, and now you see all of the executions for the life of that test case. So there's a lot of information here. These tests were run locally, so we don't have any CI/CD columns being displayed, because there's no data.

But we are showing the name of the project and the name of the branch, which is helpful to some teams. And again, we're not displaying all the fields that I showed in the database, just the ones that are helpful in those areas and in the different screens for the reports you want to view.

So here's another example. You can see, lately, this is a daily; actually, these are hourly reports that get run in our CI/CD process. And they've been failing here the last couple of days, so I have my team looking into those. But you can see, if we track back a couple of days, they were running at a 100% pass rate, so it turned green.

Here's our full regression for one of our projects, 103 tests. But if I click on one of these, I can see all the details again for these test cases. But if I come back here, you can see I mentioned you have all these different systems, right? You might have Qmetry, you might have TestNG. Now, while we're logging a lot of that information at runtime, you're not gonna log everything. So things like here, we have the build.

We have the build URL; I can click on it, and it's gonna take me right to all of those executions. You might think, what's the point of building this whole infrastructure on top of that? But I'll tell you what, how long would it take you to go find your test executions and figure out which build they were in and where they were located?

Especially if you go back a week or two. It's gonna take you forever to figure out where these are at. So we link all of those together. It gives us a great way to go back and find all these different jobs and executions. So really, I mean, it comes down to having that database, whatever data you wanna display, it's very easy to access.

The other part, let's look at: those were all older reports, just a grid system. So this page here is something we put together in a couple of days, just real quickly, leveraging the Grafana reports that I mentioned earlier.

So you can see there's a lot of different grids here. So this screen probably wouldn't be used like during a test cycle. This is more of an overall snapshot of your entire organization. In fact, this screen has some of the entire company's stats. So, total applications and total features. We're breaking these down by all the major divisions in our company.

So you can see there's about what, 14 or so divisions and how many applications each division has. We can get that at the corporate level. We can see things like trending rates here. We broke out the last three months of our pass rates so we can see how things are trending overall.

And we put in basically any data that teams or managers, especially QE managers, were requesting into this report. We wanted to give them everything, right? I don't think anyone's gonna look at all of this, but different managers are looking at different bits.

So we just put in breakdowns of all this different data, breaking it out by apps, breaking it out by executions, and gave them all this different detailed information. But I wanted to show this grid at the bottom here. Again, this is just a Grafana report.

Ultimately, it's connected to a database through a connection string. And then there's a simple query controlling all this. But as soon as you turn that on, now you've got a grid that looks something like this where you can sort your data. I can come up here, and I can click on a particular column and sort data. I can come up here and do things like filters.

And there's no code behind any of this. It's just a query connected to a database. So they're very, very powerful, very interactive, and all this is with no coding. So in this particular report, we actually did couple it to some degree with the traditional application.

So this is the Visual Studio side. We do have some drop-downs where we can change things, and it will re-render all of these reports to make them even more feature-rich. But go as far as what fits your organization. The key is just having that data available in an external database.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, and I have to say, and I'm sure the viewers are feeling this as well, the speed at which the data is rendering is pretty impressive. Especially when it comes to handling your databases, there's so much data that you have to accumulate and account for.

And to have that sort of reporting mechanism in place, one which not only gives you the right info but gives it fast, that's the actual value add. And I think we were able to see that while you were walking us through this. That's my take, but I'm pretty sure the audience is feeling the same way.

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - Yeah, and I was kind of thinking this report seemed a little slow to run, but we're running all of these queries on here, and we've just dumped everything on one page. Normally, for most of your reports, you're going to be having a single execution or a single test report that you're going to be rendering. And those, if I go back here, they're just extremely quick to render. Like boom, bam.

Harshit Paul (Director of Product Marketing, LambdaTest) - Exactly.

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So, to just kind of summarize what we saw here today: we put together full-featured reports, and these are in real time. We're logging the results at runtime, the pass/fail information and whatever other information we have, and the reports, when you generate them, are always live and up to date.

So rather than taking screenshots of the data and sending those in status reports, you're able to just send that little link. And as we were just discussing, clicking those links renders almost instantaneously. So it's a great tool that especially project managers love because they don't have to go back and ask questions.

Where are you today? Where are you now? They can just click on a link that they have access to at any point. Now, as far as access goes we've set up all of those basic reports that you saw there. I didn't log into any of those screens. Those all just rendered.

But if you were to go in and wanted to say, add a defect number or make an update, that's when we do have an LDAP login that you have to do. And we have everybody's information in a lookup table that has your role, right? So you have to have certain roles to do things like add a defect.

Maybe you have to be a QE team member, or whatever those roles may be in your organization. We do have those constraints for update actions. But if you're building something simple, just showing Grafana reports, and you want to have those open to everyone in your organization, there's no need to even have those roles in there, unless you really have data you're displaying that needs to be protected.

So this is a very low-cost or no-cost solution. And the nice thing is the data doesn't change over time. Like I mentioned, we've been using this data for 14 years, and we haven't had to scrub it or do any sort of data reduction, because it's well laid out enough that it's able to maintain your data over time and just keep a nice legacy system.

So what does that mean? It means that you have reduced migration costs. As tools change, as repositories change, this piece can remain constant. So it's an easy way to integrate multiple defect systems and report on them from a single location, and also to assist other testing or reporting tools you may have, such as Qmetry, TestNG, or Git reports.

You can bring those all into a single report or link out different pieces to them. And best of all, this is a customizable solution. So tailor it how your organization needs it. And it allows for future growth, right? You're not being locked into a vendor solution with a limited number of fields or options. You can do basically anything with it.

So just sit down at the beginning, lay out what your end goals are and the fields you need to get there, and build up something that's tailored and suited to your enterprise. So again, yeah, thanks, everyone. I've got my LinkedIn information here. If you'd like to give me a follow, or if you have any questions or follow-ups, feel free to message me on LinkedIn, and I'll try to reply to everybody as soon as I can.

Harshit Paul (Director of Product Marketing, LambdaTest) - Yeah, and speaking of questions, I would have some for you as well. And we did touch upon these aspects while you were presenting as well, right? Speaking of various web automation frameworks in general, test automation frameworks rather, right?

So how do you plug these into, say, a web-based reporting solution? I mean, you did showcase how JUnit and TestNG were all a part of the picture. But especially given how frequently the space is evolving, we have new automation frameworks coming into the picture as well.

So just quickly summarizing, how can folks keep that in mind, plug in all the test automation frameworks, and manage web-based reporting solutions in a seamless manner?

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So the core of all of this would be being able to write back to the database, right?

So pretty much all of your different languages and solutions and frameworks support interacting with a database. You can do that by simply doing a SQL update query and passing in the fields. Or, and we're a big proponent of this, by using a stored procedure, right?

So you can execute that and have a central point of maintenance for the procedure, and then your individual test suites and architectures can just call it with whatever parameters are appropriate for that type of test.
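
If a framework can't easily call a stored procedure, even a plain parameterized statement works, since virtually every language's database client can execute one. A minimal sketch against the hypothetical tables, with driver-dependent parameter markers:

```sql
-- Minimal parameterized insert a teardown hook could issue from any language.
-- Parameter markers (@name / ? / :name) depend on the driver being used.
INSERT INTO dbo.TestResult (RunId, IterationNo, Result, DurationSecs, ErrorMessage)
VALUES (@RunId, @IterationNo, @Result, @DurationSecs, @ErrorMessage);
```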

Harshit Paul (Director of Product Marketing, LambdaTest) - That helps. And since you mentioned that you've been managing this database for 14 years, I can't help but wonder, how did you manage scalability requirements? Because there's always a growing volume of test execution data that keeps expanding. So how do you account for that?

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So, a couple of ways. So the best way is to limit the amount of data you're putting in there, right? Don't put in data you're not going to need. Don't put in huge strings. Like we're not attaching and capturing every single bit of failure information. We're just getting a very small snippet of that.

So we're keeping our actual data content very, very low. And then, if we need more information, we're leaving that in the original system and linking out to it. We do have the screenshots on the file server on that machine, and you'd think those would be the biggest culprits for tying up space.

But I don't think we've done a cleanup effort even on those, as long as you keep them in a reduced format, maybe a PNG or a JPEG. And the key is we're only taking the first failure, right? We're not saving failures from every single step, because, let's face it, the first failure is typically the main failure; once something has failed, a lot of times everything after that is going to fail too.

So just keep your reporting to a minimum. If you do run into space constraints, you can always go back and archive or warehouse the data, and there are a lot of different optimization options, but really take a look at those database structures and try to optimize and index all your data and layout.

Harshit Paul (Director of Product Marketing, LambdaTest) - Right. And you did touch upon the cleanup side of things as well. So how frequently do you plan these cleanup scans? Say somebody does end up putting extra information into the database, how frequently do you have some sort of cyclic check in place, or how does that happen?

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So we actually haven't really done a data cleanup effort, because we haven't run into any scenarios that required one. Even in 14 years, with about 350 QE team members logging all this data on a daily basis, we haven't had to go through a cleanup activity.

So what we try to do is limit the field sizes and really control the data going in on the front end, whether that's through adding constraints on your database, limiting field sizes, or doing checks in the stored procedure. We do a lot of data checks, right? We check, is this a valid value, is it within the acceptable limits, and we really try to cleanse that data at the beginning so we don't have to do it on the back end.
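
As a rough illustration of those front-end checks, here is a stripped-down procedure sketch (a hypothetical name, deliberately different from the earlier call example, and only a few parameters) that validates and trims values before they ever reach the tables.

```sql
-- Illustrative validation on the way in; not the actual production procedure.
CREATE PROCEDURE dbo.usp_LogResult_Min
    @RunId        INT,
    @Result       VARCHAR(10),
    @DurationSecs INT,
    @ErrorMessage VARCHAR(MAX) = NULL
AS
BEGIN
    -- Reject values outside the accepted set instead of letting bad data in.
    IF @Result NOT IN ('Pass', 'Fail', 'Skipped') OR @DurationSecs < 0
    BEGIN
        RAISERROR('Invalid result value or negative duration.', 16, 1);
        RETURN;
    END

    -- Trim rather than store oversized messages; the full log stays in the source system.
    SET @ErrorMessage = LEFT(@ErrorMessage, 255);

    INSERT INTO dbo.TestResult (RunId, Result, DurationSecs, ErrorMessage)
    VALUES (@RunId, @Result, @DurationSecs, @ErrorMessage);
END;
```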

Harshit Paul (Director of Product Marketing, LambdaTest) - Wow, that is impressive. It goes to show how well planned the data going into the database is. And one more question: since you've used a database, folks might also wonder, if they have to switch to a new tool set or bring in a new vendor, how do they make that switch without worrying about data migration efforts?

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So that's the key here: the data that you typically would migrate from another system is now in your own isolated database. So you can change whatever tools, whatever vendors; you're not touching or moving that data for the test execution results. That's staying in your local database.

So really, the only thing you'd have to do, and it's not really a migration, is connect your new system to log that data going forward. As soon as you bring that new system live, in your on-test-exit or on-finish routine, add that call to log the data.

Harshit Paul (Director of Product Marketing, LambdaTest) - Got it. And regarding, you know, requirements where you do want to look at data in real time, how do you mitigate latency concerns and provide a responsive user experience?

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So a lot of that comes down to how the data is structured. For example, on the database creation, we have a lot of indexes. If you have a lot of fields that you're using in queries, make sure those are indexed properly, and just make sure that your queries are efficient and effective.
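
For instance, indexes that match the filter and join columns of the report queries sketched earlier might look like the following; the index and column names are again illustrative assumptions.

```sql
-- Hypothetical indexes covering the columns the report queries filter and join on.
CREATE INDEX IX_TestRun_TestSet_Date ON dbo.TestRun (TestSetId, RunDate)
    INCLUDE (TestName);
CREATE INDEX IX_TestResult_Run       ON dbo.TestResult (RunId)
    INCLUDE (Result, DefectId);
```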

So again, we really haven't had any issues with speed or latency, but if you are running into some of those issues, query optimization would probably be your quickest win. You can also archive the data, do some data compression, load balancing, caching, or any of the traditional database data-handling techniques.

You could, what's the word, data warehouse a lot of it. Again, we haven't done that because we just haven't had any performance needs, but you could warehouse some of your data so you keep those queries small, or move it off-site so you're not querying the data from the same database that you're logging to.

But unless you're just logging a ton of data or your database isn't optimized, I really think 98% of the people who try this are not going to run into data issues. I would hope not.

Harshit Paul (Director of Product Marketing, LambdaTest) - And speaking of speed, and looking at the entire span of time, 14 years and no performance glitches at all, that says a lot about how future-proof the entire setup has been ever since you folks started, right?

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - Yeah, and like I mentioned, I've been in this industry for over 25 years, and this isn't just implemented here at Lincoln; I've used a system like this throughout my career. Ironically, I actually built something like what is today known as Quality Center before it even existed.

And a lot of the components that you're seeing in these reports were always built in there. So this report really hasn't changed in the course of my 25-year career. We've been reporting it. It's amazing, right? I mean, what stays constant for that long?

Harshit Paul (Director of Product Marketing, LambdaTest) - Right, and knowing that change is the only constant on that side, right? So speaking of being future-proof, this would be the last question from my end to you, Scott: what are your takes on the future when it comes to database-centered solutions meeting the ever-changing demands of the testing community?

Scott Paskewitz (Automation Architect, Lincoln Financial Group) - So I think it has a lot of value. Exactly, yeah. As tools change, and right now we're looking at a lot of AI solutions, ultimately you're still going to be recording that data, right? At the end of the day, your management team is going to need to look at the data and make a go or no-go decision, right?

So they're going to say, where are the problem areas? What worked, what didn't? So your core data that you're looking at, I still don't think that's going to change much over the course of all these changing trends and technologies, right?

At the end of the day, you have to say, did this work or didn't it, and if it didn't work, what were the reasons? So this is really targeted at helping those people determine, should we go live? So I don't see the core data changing all that much, but if it does, again, it's completely open, and you can tailor and customize it to however your organization needs to view it or report on it.

Harshit Paul (Director of Product Marketing, LambdaTest) - And those were all the questions from my end. Thank you so much for addressing them, Scott. To our viewers and listeners, by all means, feel free to connect with Scott on LinkedIn as well. You can see the QR code over here, and you also have the LinkedIn URL.

So if you have any further questions, feel free to reach out to Scott personally as well. And thank you so much, Scott, for joining us for this session and taking us on a deep tour of how you are approaching database-centered solutions and building customized reports on top of them.

And this was extremely insightful; I'm pretty sure our viewers feel the same way. And thanks to everyone who joined us for this LambdaTest XP Series, where we bring you the latest trends, innovations, and conversations from the world of testing and QA. Feel free to subscribe to our LambdaTest YouTube channel as well, and make sure you check out our previous episodes of the XP Series.

As a matter of fact, Scott did touch upon Grafana Labs while we were at it, and we happened to have a solid XP Series session in the past with Marie Cruz, who joined us to talk about k6 browser, an open-source load testing utility from Grafana Labs. That session had a pretty unique name to it as well: Fast and Furious: The Psychology of Web Performance.

It was a very insightful session, so feel free to check out all those episodes on the LambdaTest YouTube channel. Thank you once again, Scott, for taking time out of your busy schedule; this has been extremely helpful for me and for the audience as well. Thank you so much. And that is it for today. This is Harshit signing off. Until next time, Happy Testing.

Past Talks

Fast and Furious: The Psychology of Web Performance

In this webinar, you'll delve into the intricate psychology of web performance. Uncover the significance of prioritizing performance over design, understand why slow websites induce irritation, and examine the profound impact a 10-second response time can have on user satisfaction.

Watch Now ...
How Codemagic Mitigates Challenging Mobile App Testing Environments

In this webinar, you'll learn the secrets behind how Codemagic, a cloud-based CI/CD platform, helps tackle the challenges faced by mobile app developers and QA engineers and pro tips for healthy workflow infrastructure.

Watch Now ...
Revolutionizing Testing with Test Automation as a Service (TaaS)

In this XP Webinar, you'll learn about revolutionizing testing through Test Automation as a Service (TaaS). Discover how TaaS enhances agility, accelerates release cycles, and ensures robust software quality.

Watch Now ...