XP Series Webinar

Fast-Tracking Project Delivery: Tips from a Recovering Perfectionist

In this webinar, you'll learn tips from a recovering perfectionist on how to streamline bug reporting, provide developers with clear information, centralize bug tracking, and promote collaboration among project stakeholders.

Antony Lipman

Customer Success & Training Manager, PractiTest

Antony Lipman is the Customer Success and Training Manager at PractiTest. He has over 20 years of experience in managing customer relationships across a variety of industries. Antony provides customers with continuous training aimed at sharpening their testing expertise to help maximize their PractiTest value. Prior to joining PractiTest, Antony held positions in adult education, consulting, and the nonprofit sector, eventually transitioning into the technology space.

Harshit Paul

Director of Product Marketing, LambdaTest

Harshit Paul serves as the Director of Product Marketing at LambdaTest, where he plays a pivotal role in shaping and communicating the value proposition of LambdaTest's innovative testing solutions. His leadership in product marketing ensures that LambdaTest remains at the forefront of the ever-evolving landscape of software testing, providing solutions that streamline and elevate the testing experience for the global tech community.

The full transcript

Harshit Paul (LambdaTest) - Hello, everyone, and welcome to another exciting episode of the LambdaTest Experience (XP) Series. Through the XP Series, we deep dive into a world of insights and innovations, featuring renowned industry experts and business leaders in the testing and QA ecosystem.

I'm Harshit Paul, Director of Product Marketing at LambdaTest, and I'll be your host for this session on fast-tracking project delivery: tips from a recovering perfectionist. We'll unravel one of the major complexities of the software development and testing cycle, namely how to fast-track your entire project delivery process.

Joining us today is Antony Lipman, Customer Success and Training Manager at PractiTest. Antony brings over 20 years of experience in managing customer relationships across various industries. At PractiTest, Antony provides customers with continuous training to sharpen their testing expertise, helping them get the most value from their QA efforts.

Hi, Antony; so glad to have you here. How about you let the audience know a little bit more about yourself?

Antony Lipman (PractiTest) - Hello, everyone, and thank you, first of all, for inviting me to take part in this XP webinar. It's a pleasure to be with you today. I've been in my present job for over six years as a customer success manager at PractiTest. I help customers work through using test management software, calling on the experience I have from working in the testing arena.

In fact, that wasn't my original position. Originally I started off in education 20 years ago, and I loved informal education rather than formal education. I never would have ended up teaching in a school, because I spent the first part of my life going through school and trying to get out of school, so there was never a chance that I was going to go back to school. However, over time I developed a love for educating and helping people try to achieve their potential and get what they needed from whichever area I was in.

When I came to Israel 15 years ago, I was in adult education, and Israel is full of adult educators. So I needed a new profession. It happened that the kindergarten teacher's husband was in high tech and, looking for a job as one does, I said to him, you know, any jobs going in high tech? And he said, well, I'll ask my company, and maybe something will be available.

And there was a job in QA. So I went to an interview there, having absolutely no experience of QA at all, being completely green. The interviewer put up on the screen a picture of a login screen and said, how would you test this screen? It's got a username, a password, forgot password, and a login button. And so I went through the sorts of things I'd think of, the questions that I'd ask of this login screen, to see what I could find.

And it turned out to be enough, and I got the job. I worked at this company for almost three years, in the days when telephones were still on the telephonic side of things rather than on the data side. They had a Skype solution that used the telephone rather than data as the main system. It was cheap for the telephone, expensive for data, and they managed to allow you to use the telephone and not the data.

But while I was there, I found myself spending more and more time talking to the support team and working with support. And I really enjoyed helping them answer their questions. And so when I moved on from there to another company, I had a new position, which was a mixture of QA and support. And so I was working on both sides.

And then in my present job, I've moved over to full customer success, which, of course, is a newer area. It comes about as a result of SaaS software needing to build loyalty with customers and ensure that they're successful in using the software. With the old systems, you worried about your customer at the beginning of the sales cycle and at the renewal cycle, and never in between.

But now, of course, when people can so easily switch between one piece of software and another, we have to look after our customers. And it goes beyond straight support. We do strategic work with them, helping them develop their use of processes. And, of course, in an area like that, actual core knowledge of the system that you're using becomes vital. So my previous knowledge of working in QA has helped me at the company I'm working for now, PractiTest, where we make test management software.

With that, I'm able also to talk more about QA and develop my knowledge of QA and QA processes, much beyond what I was doing when I was actually just a tester. I see things from lots of different perspectives now. I see things from the end user's perspective. I see things from the tester's perspective. And, of course, I have to see things from the test manager's perspective, a position which I wasn't in.

But now I look at things from their perspective, talk mainly to them, discuss their processes, help them with using our software, and make sure that they achieve the best from it. And that's one of the ways, I think, that one piece of software differentiates itself from another. You can produce a good piece of software, but if you don't help people use it, don't educate them, discuss their problems with them, and work through and help them find solutions, then that can be the difference between a successful company and a less successful company.

Harshit Paul (LambdaTest) - Well, that was great. And I'm pretty sure a lot of folks will relate to parts of this as well. Your story certainly goes on to prove that there is no right or wrong path when it comes to career transitions, especially when you've been a working professional for nearly two decades; I mean, you've seen it all, right? And to bring all that experience to the table in your current role and build on it at every stage of the journey, that is truly inspiring.

Having said that, in today's session, Antony will illuminate the path to smoother, faster, and more efficient software delivery, since it's not just about finding bugs; it's also about optimizing the entire process. So as we deep dive into the session, think of it as a journey where you will learn how to streamline bug reporting, communicate effectively with developers, centralize bug tracking, and foster collaboration among the team. So without further ado, let's jump into the heart of the discussion.

Antony, what do you think about approaching software testing methodologies from a user perspective, to make them more user-centric?

Antony Lipman (PractiTest) - It's interesting that you talk about this, because I think that's one of the things that is changing. You talked about what's changed over the last 20 years, and I think that's very much one of the things that have developed. And we're seeing it a lot in our company: we have customers who are involved either in deep software development, or who are running projects as well.

We have customers who are integrating ERP projects using test management software, and their approach is going to be very different from somebody who's testing, for example, a piece of software that they have actually developed themselves. And I think this looking at the end-user experience is very much something that is involved in project development, because a lot of the end users will often be the people who are actually in the testing environment.

So, for example, if somebody is implementing an ERP system, they've bought the system off the shelf from a developer, it's going to be customized and integrated into their system, then you're going to be looking at, you know, how do we test the system to make sure it works? And especially in cases like that, you're going to need to look from the end user's point of view. So the end user is actually going to be the customer who you have in the room down the hall from you. And you're going to say to them, firstly, give me your expertise. I need to know how to test this piece of software.

How would you use it? How would you actually use it? And so you start developing the flows that they're going to use. Are they going to be using a particular pathway, whether it's the happy path or a different path? You look at the routes that they're going to take, and then possible deviations from those routes: either intentionally or unintentionally, are they going to press a button by mistake? Are they going to continue along the path that is predicted for them?

And obviously, firstly, you're going to test the predicted path that they're going to use and are likely to use. They may be testing it themselves, but you have to write the test cases to help them along the way. So you may have things that have been written by an external development company as sort of guide points or charter notes to work with. But then you're going to be developing the exact tests in the way that they're going to need to test them. And so a lot of collaboration, I think, is necessary.

And we live in a more collaborative age now. Collaboration is something that gets talked about a lot in podcasts and webinars. Shockingly enough, I was doing a webinar on exactly that subject last week, and the one question that came up was: we don't have much collaboration. And I was absolutely shocked that in this day and age, with all the tools, Slack, Jira, comments, mentions, and all of these things, there are still teams that have no collaboration between them.

Harshit Paul (LambdaTest) - Yeah, I think because, at this point, everybody's focusing too much on pace, right? And it's not only about the tools at hand but also about the process that you have in place with those tools, right? So as you said, we have so many tools that are leveraged these days, but we probably might not be making the best use of them.

But there are still siloed communications that are very often observed. Speaking of which, how do we best align these QA practices, say, with your CI/CD pipeline in a way that does not disrupt the pace of your development cycle, but at the same time ensures all your DevOps and QA practices are in place?

Antony Lipman (PractiTest) - Yeah, so again, that's another thing that has recently changed. You know, we started off with waterfall; you had three months, six months, however long it was, to do a release. You knew your timelines. If it went a bit late, okay, we'll release it a week later. It doesn't really matter; we've got to send out the service packs and release the service packs, and the customer doesn't know the exact date they'll arrive, so we've got a bit of flexibility there. Then we moved to agile, with two-week sprints.

And at the end of those two weeks, you have to have something that maybe you won't deliver, but at least it has to be deliverable. And if you've broken the system, you have to put it back together by the end of those two weeks. And now we've moved to DevOps, which is continuous delivery, and you've really pushed things down even further. What I like to explain with regard to DevOps is that you've got this sort of infinity shape that you work around.

So you've got your shift left and shift right. On the left-hand side, you've got the development side of the loop, and on the right-hand side, you've got your deployment and production side. And so we've moved testing: instead of saying, well, we're going to test at a specific point just before we release, a couple of weeks or months before release in the case of waterfall, or in the last three days of an agile cycle, we have to test right through the whole process.

So as soon as the development cycle starts, you're going to start writing test cases. Things which can be tested in development, you're going to test in development at that point. Things which you need to test early, because you think there's a big risk of there being lots of bugs or difficult bugs to resolve, you're also going to start testing early in the cycle. But then things which are minor bugs, maybe small UI bugs, you're going to say, well, let's leave them even for production.

Harshit Paul (LambdaTest) - Right, Thank you.

Antony Lipman (PractiTest) - Because if they could get away with being in production and they're not going to significantly affect the user experience, as long as they're not going to destroy the user experience, that can be done in production. You can even use your users as testers because your users as testers will see a bug and immediately report it to you. And unlike the days when you had these long development cycles, and it was three months later, if it's something really considered to be serious, I'm professional.

You can fix it within minutes. It can be fixed within minutes and then pushed within a couple of hours onto the live server. So you've got the ability to do things like hotfixes, or at least even if you put it into the next CI, the next continuous development deployment, which may be a week away, you've still got it fixed in a very short period of time compared to an old waterfall system.

So there will be things where you'll say, well, it's acceptable either to release with them, knowing about them, or to release in the hope that everything's going to be okay, because we have risk-assessed the bugs and we expect them to be low-risk bugs that we can deploy with and fix later on.
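To make this idea of testing right through the loop a little more concrete, here is a minimal sketch, assuming pytest, of how tests can be tagged so that different slices run at different points of the pipeline. The marker names, test names, and CI commands are illustrative assumptions, not something Antony prescribes.

```python
# A minimal sketch (assumed setup): fast checks run on every commit,
# riskier integration checks run before deployment, and cheap smoke
# checks run against production. Register the markers in pytest.ini
# to silence "unknown marker" warnings.
import pytest

@pytest.mark.unit
def test_password_rules_reject_short_passwords():
    # Stand-in for a real validation call; cheap enough to run on every commit.
    assert len("abc") < 8

@pytest.mark.integration
def test_login_flow_against_staging():
    # Would exercise the staging environment before a release.
    ...

@pytest.mark.smoke
def test_homepage_responds_in_production():
    # A cheap, read-only check that is safe to run against live traffic.
    ...

# In CI you might then select by stage, for example:
#   pytest -m unit          on every commit
#   pytest -m integration   before deployment
#   pytest -m smoke         after deployment / in production
```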

Harshit Paul (LambdaTest) - Right, and as you rightly pointed out, QA is at the center of that infinity loop, right? And the life of an agile tester is never really easy, right? Because agile is, in a way, accelerating your product shipping velocity. And as a matter of fact, the plate can only get fuller for a tester, right? You have new products coming in every week, and there are already systems in place that you need to keep an eye on, right?

And sometimes it leads to the bigger issue of how much bandwidth you can actually get from your QA teams, right? So especially for young projects, when you have limited resources or time constraints, which methodologies do you recommend for prioritizing test cases effectively without compromising on test coverage?

Antony Lipman (PractiTest) - So I think that there has to be some kind of risk assessment here. There are many ways you can do a risk assessment. One that I particularly like is the FMEA system, whereby you're looking at things like severity, priority, and likelihood of things happening. Now obviously, this sort of risk assessment you can't do alone; it needs to be done by brainstorming with the subject matter experts, whether they be the developers or the users of the product.

And you need to discuss these things with them and work out what is going to be the high-risk, high-severity, high-priority situation that you could land yourself in. So you take each of these on a scale of one to five. So if you've got three areas that you're looking at, you could say we're going to look at severity, and a high-severity bug is going to be loss of data.

A low-severity bug is one that's going to have a trivial effect. You can then look at priority and say, well, the highest priority is going to be an unacceptable loss of functionality or use, and the lowest is again going to be a very minor loss of functionality. And you score each of these between one and five. And then the likelihood: how many users is it going to affect? If it's going to affect all the users, then it's going to be one. If it's going to affect very small numbers of users, or virtually never affect any users at all, then that will be a five.

And you take the three numbers, you multiply them together, and depending on the result you get a risk priority number, which then determines how much testing you're going to do on these things. So if you have a very low risk priority number, for example if all sections are one (it's going to affect all users, it's going to be a major loss of functionality, and possibly data loss as well in severity terms), then you might have a risk number of one, which means you have to test this particular function extensively and completely, and make sure that you're not going to lose any use at all through it.

Whereas if you get something at the opposite end of the scale, say 5 times 5 is 25, times 5 again is 125, well, you're not going to test that at all. You're going to say, well, if it comes out in production, then we'll fix it at some point in production. And then, on the scale between those, you're going to test based on those particular numbers: something relatively low, you'll test more; something higher, you'll test less, and it's going to give you less of a need to test.
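To make the arithmetic concrete, here is a minimal sketch in Python of the FMEA-style scoring described above, keeping the convention that 1 is the worst score on each axis. The effort thresholds and feature names are illustrative assumptions rather than part of the discussion.

```python
# FMEA-style risk priority number (RPN) sketch: each factor is scored 1-5,
# where 1 is the worst case (highest severity, highest priority, affects all
# users) and 5 is the mildest. RPN = severity * priority * likelihood (1-125).
from dataclasses import dataclass

@dataclass
class RiskItem:
    name: str
    severity: int    # 1 = data loss ... 5 = trivial effect
    priority: int    # 1 = unacceptable loss of functionality ... 5 = very minor
    likelihood: int  # 1 = affects all users ... 5 = affects virtually no one

    @property
    def rpn(self) -> int:
        """Risk priority number: the product of the three scores."""
        return self.severity * self.priority * self.likelihood

def test_effort(item: RiskItem) -> str:
    """Map an RPN to a testing-effort bucket (thresholds are illustrative)."""
    if item.rpn <= 8:
        return "test extensively and completely"
    if item.rpn <= 45:
        return "test the main flows"
    return "defer; fix in production if it ever surfaces"

if __name__ == "__main__":
    features = [
        RiskItem("checkout payment flow", severity=1, priority=1, likelihood=1),
        RiskItem("profile avatar upload", severity=3, priority=3, likelihood=3),
        RiskItem("footer link hover colour", severity=5, priority=5, likelihood=5),
    ]
    for f in features:
        print(f"{f.name}: RPN={f.rpn} -> {test_effort(f)}")
```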

So that's one method of risk assessment. We could also look at a business-value-based approach as well. You know, are you going to have a very high loss of business value from the fact that you're not testing extensively and finding bugs? And then a third option is to look at something like a dependency-based approach. So you can look at all of these different things and say we're going to test on all of these bases, to hopefully give us a more efficient testing process, something which we can manage at a more manageable level than saying, well, I have to test everything all of the time.

Harshit Paul (LambdaTest) - Right, and that definitely helps. As you mentioned, prioritization is key if you're looking to ship with pace without compromising on quality. And Agile, as a matter of fact, has helped that tremendously, and pretty much everybody is aware of that. Another major change that has happened in the landscape is from an architectural perspective: we have seen a shift from monolithic applications to more microservice-based architectures coming into the picture.

And with that, there might be some unique strategies or challenges that should be considered while testing microservices, or, you could say, containerized applications as well. What do you think should be kept in mind, particularly in terms of interdependencies and scalability?

Antony Lipman (PractiTest) - So I think one of the things with microservices, bringing in different services from different places, is that you're going to have to look a lot at the integration side, and work out, when you integrate these tests and these services together, whether you're going to have something which still works efficiently, or whether you're going to have bugs that come about as a result of the integration.

And so for things like that, SIT, system integration testing, and UAT, user acceptance testing, become very important. And so we talk about these by looking at who's going to be using the systems, who's going to be integrating them, and building up tests for each of them. So you can look at the tests each of the individual suppliers gives you, but then you're going to have to add an integration level as well, which gives you a mixture of all of these put together.
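As a small illustration of that integration level, here is a hedged sketch of a system-integration check across two hypothetical microservices; the service URLs, endpoints, and payloads are made-up assumptions for the example. Each service may pass its own supplier's tests, yet the hand-off between them is exactly where integration bugs tend to surface.

```python
# A minimal SIT-style sketch (hypothetical services and endpoints).
import requests  # third-party: pip install requests

ORDERS_URL = "http://localhost:8001"     # hypothetical orders service
INVENTORY_URL = "http://localhost:8002"  # hypothetical inventory service

def test_placing_an_order_reserves_stock():
    # Read the stock level, place an order, and check both services agree.
    before = requests.get(f"{INVENTORY_URL}/items/42").json()["stock"]
    response = requests.post(f"{ORDERS_URL}/orders", json={"item_id": 42, "qty": 1})
    assert response.status_code == 201
    after = requests.get(f"{INVENTORY_URL}/items/42").json()["stock"]
    assert after == before - 1  # the two services stayed consistent
```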

And one of the big challenges, of course, of doing things like SIT and UAT is that you can have very large numbers of people involved in the testing cycle. And so it becomes easier to use a test management system than to, for example, try to manage it in Excel documents when you may have a hundred or more testers involved in the testing process.

Harshit Paul (LambdaTest) - That makes sense, actually. And speaking of all these processes kicking in, one thing everybody needs to make sure of is that the right insight and intelligence is there in terms of QA analytics, right? So how do you recommend utilizing test insights and analytics in decision-making during the testing process?

Antony Lipman (PractiTest) - Right, so one side of using test management is to organize your tests and make sure that people are testing at the right time, when you need them to. Having your regression suites organized so you can just pull up a regression suite, rather than having to pull out different tests from different places, add them to a document, and pass the document off to whoever it is.

But the other side of it is being able to get actionable results based on it, and any test management system that is worth its salt is going to give you the ability to share the insights that you're getting from the testing. First of all, to judge them yourself, and then, once you've done that, to be able to share them with upper management and say, at the end of the day, is this ready for release, or do we have to do some kind of emergency fix on it or not?

And so when you're talking about time-critical phases, you might be looking at planned tests: which tests do I have planned to take place? Do they have specific dates when I'm meant to be running each of these tests? Am I meeting my targets for them? Are there gaps? We could also look at testers and see how many tests each tester is doing and make sure that the work is load balanced, sort of a round robin of your testers, so that if one of them, or a group of them, is being overworked, you can say, well, hang on, we need to move some of the load off this group, because otherwise we're not going to finish in time, and we can share it with other groups as well.

And then, on top of that, we can look at more qualitative metrics as well. Are we seeing, statistically, that a particular group of testers, or a particular tester, is more likely to fail tests? And if they are more likely to fail tests, why? Go and discuss their testing methodology with them and find out whether it's just a matter of luck, or bad luck, that they're getting more failures than the others, or whether they're testing in a different way that is causing them to fail more tests, whether it's a more efficient way that is getting better results, or maybe they're over-testing something.

And maybe this also goes into the idea that you've got requirements that might not be clear to them, and that they're not necessarily seeing what the results should be. Do you need to tweak the way that they're testing and the results that they're getting from the system as well?
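To illustrate the kind of per-tester analytics being described, here is a minimal sketch that computes each tester's run count and failure rate and flags possible overload. The records, names, and the 1.5x overload threshold are made-up assumptions; in practice a test management tool's reports or API would supply this data.

```python
# A minimal sketch of per-tester load and failure-rate analytics.
from collections import defaultdict

runs = [  # made-up execution records for illustration
    {"tester": "dana",  "status": "passed"},
    {"tester": "dana",  "status": "failed"},
    {"tester": "dana",  "status": "passed"},
    {"tester": "yossi", "status": "failed"},
    {"tester": "yossi", "status": "failed"},
    {"tester": "mia",   "status": "passed"},
]

totals: dict[str, int] = defaultdict(int)
failures: dict[str, int] = defaultdict(int)

for run in runs:
    totals[run["tester"]] += 1
    if run["status"] == "failed":
        failures[run["tester"]] += 1

average_load = sum(totals.values()) / len(totals)

for tester, total in sorted(totals.items()):
    fail_rate = failures[tester] / total
    overloaded = " (possibly overloaded)" if total > 1.5 * average_load else ""
    print(f"{tester}: {total} runs, {fail_rate:.0%} failed{overloaded}")
```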

Harshit Paul (LambdaTest) - Yes, and then there are the even more notorious cases of flaky tests, right, which nobody wants to face on top of it all. So having good insights and analytics might help with better detection of flaky test trends, just to help people realize where they need to put their bandwidth, which things they can park, and which things they really need to put the accelerator on. That definitely helps. And I'm pretty sure, you know, anybody who's in QA, or a dev as well, has faced this; it's a very all-around question.

Antony Lipman (PractiTest) - Yes.

Harshit Paul (LambdaTest) - We work on a certain set of requirements. They come from, say, customer feedback or client requirements that we have. And there's a certain sort of ambiguity that we pick up as we go through these phases of communication, from design to dev to QA to prod. So how do you deal with these ambiguous requirements or specifications?

Antony Lipman (PractiTest) - Yes, well, at a worst-case level, I've actually been in conversation for a long time with a particular person who was hired to manage a testing process for an educational establishment; let's just leave it at that. And he said the requirements keep on changing. I feel like I'm either in quicksand, or else the goalposts keep moving on me. And he said it's impossible to test based on this fluidity of requirements and the way they just keep changing.

And whenever I come to them and say, well, this requirement is not clear, they say, oh, well, just do what you think. And then, when it comes back two weeks later and he says, well, I failed all the tests based on this, they say, no, you weren't meant to do that. And so it really puts a test manager in an impossible situation.

And I think the message from this, at the end of the day, is collaboration; there has to be a collaborative effort here. A test manager can look at a set of requirements and build up a set of test cases based on them. If he sees that there's an ambiguity in the requirements themselves, he can write test cases built on one particular interpretation, and then he needs to feed them back to project management.

But he has to have a project manager he can talk to and discuss these things with. And, of course, that's only two sides of the story. There's a third side as well, which is when the developer decides that he's going to develop something which is totally different from what either the tester or the project manager intended. And so again, it's this sort of three-way feed of trying to get everyone together and on the same page.

And, of course, that's something that Agile has tried to build in so well: you have these scrum teams with a small number of developers working together with one or two testers and a project manager, all working in this sort of small environment together, so that they keep up the communication and make sure that if there are any ambiguities in the specifications which are written, they're quickly ironed out before they become a major problem.

Because if you can iron out an ambiguity in the specification at the time when you're developing the specification, then that's great. If you do it when the developer is in the middle of developing the software, that's not great, but not disastrous. But when the developer has already finished developing the software and has then fed it back to you for QA testing, if the ambiguity is still there at that point, it's pretty much a disaster.

Harshit Paul (LambdaTest) - Do you think code review can help here, if it is done on a pair basis, where you can club QAs with developers while they're building something and have them look at what's being built while they revise their test cases?

Have you encountered such a situation where you had to do the code review with someone who has been a developer, and you were there for quality assessment while the project was being built?

Antony Lipman (PractiTest) - So, I know that many companies have these small Scrum teams with just one or two developers, a project manager, and one or two QA people, and it happens quite often in companies that we deal with that they work with that sort of methodology. And, of course, it means that the QA team is working from the moment the specifications are written, with both the devs and the project managers, to write the QA tests based on the specifications before the devs start to develop the software, and then working with them through the process of development during the particular sprint they're working on.

Harshit Paul (LambdaTest) - Makes sense. So we've talked about best practices to implement, but in the end, it comes down to execution, right? How do we execute those tests? Pretty much everybody has this question when they're at the start of their QA process. And even if they have a certain QA process in place, they always think about transitioning from on-premise to a cloud-based environment.

And there are a lot of questions. How do you decide whether you should go for on-premise infrastructure or opt for a cloud-based environment? What is your take on that?

Antony Lipman (PractiTest) - Oh gosh, this is the biggest question in the book nowadays. Of course, anyone who says you want to go for an on-premise system usually quotes stuff like security at you. But with regard to security, unless you're in a mission-critical environment where you're talking about lives being at risk, you know, you're talking about, for example, companies like Boeing; they want to make sure that the actual computers inside their airplanes have no connections to the outside that anyone could interfere with and use to take control of the plane.

In most circumstances, the level of security which we can now get from, I don't know, we probably shouldn't mention names, things like Amazon AWS servers, is sort of top-level security now, and access is so limited that it's tied right down.

And when you're dealing with companies, there are security protocols which you look for, things like SOC 2 or, for example, ISO 27001. You look for these particular certifications and reports, and you know then that you're in a pretty safe environment. So from the security point of view, in most cases, on-prem solutions aren't so necessary anymore. Having said that, you then look at the advantages of having a cloud-based solution.

And I think everything we've talked about with regard to DevOps and Agile is based on being able to work in a cloud-based environment. Because remember back, I'm a bit older than you, so when I had my first computer, it was Windows 95, Windows 3.1 even. I had a machine with Windows 3.1; it came with Windows in a box, and it had nine or ten disks that you had to feed into the machine one after the other to load it.

And you knew, basically, that you weren't getting another copy of Windows. You might get a patch, if you were lucky, six months or a year down the line, Windows 3.11 when it came out; I don't remember if it was a patchable version or whether it was completely new and you had to buy the software afresh. But then you get to Windows 95, Windows XP, etc., still being installed off DVDs or CD-ROMs when you put them in.

You had to wait for the next one to arrive before you could upgrade your system, six months or a year down the line, with service packs, etc. But now, with cloud software, we just send the command, and it's updated. And if there's a bug in the system, it can be dealt with straight away; you couldn't run a DevOps process, for example, without having a cloud-based system, because it needs to be hot-fixable and updatable in the moment.

And so I think that the advantages of a cloud-based system outweigh the on-prem solution many times over. And of course, the other side of it is that you don't need to invest in the infrastructure; somebody else manages the infrastructure for you. You're not responsible for managing the servers and buying tens of thousands, maybe hundreds of thousands, of dollars' worth of servers.

You don't have to buy them, you don't have to maintain them, you don't have to update Apache on them every time somebody puts out a fix and improves it. You have your software as a service, and it's all ready for you to use. You just log in, and you're away with it.

Harshit Paul (LambdaTest) - Right, and that certainly helps, as you said. It allows you to keep your focus on devising and running smarter tests rather than worrying about the hassle of maintaining existing infrastructure or building an entire one from scratch, which is not only more time-consuming but also, you know, puts a hole in the pocket for some, right? So it's not suitable for everyone. So yes, thank you so much. With that, we conclude this enlightening session on fast-tracking project delivery.

A huge shout-out to you, Antony, for gracing us with insights and wisdom from your experience; this has been truly inspiring. Last but not least, thanks to our fantastic audience for tuning in. I hope you got to learn a lot from this session, and stay tuned for more episodes of the LambdaTest Experience (XP) Series in the future, where we will deep dive into more tech intricacies. Until then, keep innovating and happy testing.

Antony Lipman (PractiTest) - Thank you, Harshit. It's been a pleasure to be with you.

Harshit Paul (LambdaTest) - Been a pleasure to have you here as well. Thanks for joining us.
