Survey Report

Future of Quality Assurance

Overview Message...

We started the Testμ initiative to promote a discussion on the future of quality assurance. At the conference, we got a chance to listen to industry movers, thought leaders, and experienced software experts.

Along with those, we also wanted to give a voice to the community and understand what the future of quality assurance looks like in the years ahead.

This survey was an attempt to do just that.

In this survey, 1615 respondents from 70 different countries shared their insights on what the current quality assurance landscape looks like and what the future holds for this domain.

Here are the insights from this survey:

Demographics...

Let’s start with demographics. Fortunately for us, we had a diverse mix of survey participants, belonging to all types of industries, company sizes, team sizes, experience levels, and geography.

This variety in respondents provided us with a rich, multifaceted understanding of the global QA landscape. However, it's important to recognize the inherent reach bias in any open-to-all survey, and though we have tried to mitigate these through our community efforts, there may be some skew or bias present in some sections of data.

Role Distribution: Test Engineers/QA Engineers constituted the highest percentage of respondents at 51.4%, followed by Test Architects and Test Leads at 15.1%.

[Graph: Overall]

Experience Levels: Professionals with over 10 years of experience formed the largest cohort at 31.80%, closely followed by those in the 0-3 years range at 27.90%.

[Graph: Overall]

Company Size: We found a notable distribution across small (1-100), medium (101-1000), and large (1001-10000+) organizations, with a majority of respondents working in large-scale companies (43.9%).

[Graph: Overall]

Key Insights...

QA Bandwidth

Teams spend 10.4% of their time setting up and maintaining test environments, and an additional 7.8% of their time fixing flaky tests. This can be streamlined through the right tooling.

Culture of Testing

71.50% of organizations involve testers in sprint planning sessions, signaling a substantial shift towards quality-focused development. However, small organizations are lagging in this metric with only 61.60% involving testers in every sprint.

CI/CD Adoption

89.1% of teams embrace CI/CD tools for rapid release cycles. Yet 45% of organizations trigger their automated tests manually and do not leverage CI/CD tools for running tests.

Test Prioritization Challenges

73.8% of teams run all automation tests every time. This brute-force approach leads to longer developer feedback times. The lack of a structured prioritization or test orchestration system also risks overlooking factors like risk levels and customer feedback.

Test Intelligence and Analytics Gap

28.70% of organizations lack dedicated test intelligence infrastructure, with 19.16% of organizations lacking even basic structured reporting systems. It’s evident that there is a need for adopting better observability and analytics practices and tooling.

AI/ML Adoption

77.7% of organizations are using, or planning to use, AI tools in their workflows. This includes AI use in test data creation (50.60%), test log analysis and reporting (35.70%), and formulating test cases (46.00%). However, reliability (60.3% orgs) and skill gap (54.4% orgs) remain the biggest challenges in integrating AI effectively.

Culture of QA...

One of the key insights the survey unveils is the undeniable impact of organizational culture on the quality assurance process.

Resource Allocation...

Quality of digital experience is becoming ever more important in today’s digital-first economy. This is evident in the fact that organizations are devoting significant resources to quality assurance of their digital experiences.

Percentage of Development Budget Allocated to Testing...

  • 40% of large-scale companies spend more than 25% of their development budget on testing, with nearly 10% of enterprises spending more than 50%. This showcases how important quality is for organizations of all sizes.
  • The interesting finding, however, is that about 26.2% of mid-scale organizations do not know how much budget is allocated for their testing needs. This may reflect a lack of interest in budget allocation, a lack of transparency within the organization, or a process issue. All three are worth addressing.

What percentage of your development budget is allocated to testing?

[Graphs: Overall / Small / Medium / Large]

AI-powered code generation is a powerful ally for testers, enhancing efficiency and allowing us to focus on strategic aspects of testing rather than mundane tasks.

Senior Test Engineer

More on AI: AI/ML in Testing

Ratio of Testers and Developers in a Project...

  • The majority of organizations, especially medium and large ones (58.30% and 55.20%, respectively), report having 1-3 QA Engineers per 10 developers. This indicates a standard industry practice of maintaining a moderate number of QAs in proportion to developers.
  • Smaller organizations tend to have a higher ratio of developers to QAs, with 25.60% reporting less than 1 QA per 10 developers. This is consistent with smaller organizations having resource constraints or different operational scales, with more generalist roles rather than specialist roles.

What is the ratio of the number of testers or QA Engineers to software developers in your project?

[Graphs: Overall / Small / Medium / Large]

DevOps Team Allocation for Testing Infrastructure ...

  • 79% of respondents say they have a team of up to 5 DevOps/infrastructure members to set up and maintain test infrastructure. This reflects the effort organizations of all sizes put into maintaining a stable testing infrastructure.
  • 11% of large organizations have dedicated 10+ DevOps/Infrastructure team members to set up and maintain testing infrastructure due to complex and multi-environment set-up for their testing needs. The consideration of cost versus resources plays a pivotal role in this strategic decision-making.

How many DevOps/infrastructure team members are allocated to set up and maintain testing infrastructure?

[Graphs: Overall / Small / Medium / Large]

Time Spent by Testers ...

Time Spent on Test Activities ...

  • The survey shows that teams spend an inordinate amount of time on test execution monitoring, even more than on test authoring. In addition, more than 10% of their time goes to test infrastructure management and maintenance. Both of these challenges can be mitigated through the right tooling, like LambdaTest, which helps cut down test execution times, thereby reducing test monitoring requirements, and eliminates time spent on maintaining and scaling up test infrastructure.
  • Teams are struggling with flaky tests even at the enterprise level, where they spend more than 8% of their time fixing flaky tests. This is where AI-based tooling like LambdaTest’s flaky test detection can help out and save valuable time.

How much time in % do you spend on the following activities?

[Graphs: Overall / Small / Medium / Large]

Culture of Testing...

Test Engineers involved in Sprint Planning...

  • 70.5% of the organizations actively involve testers in every sprint planning. In large enterprises, this is even higher with 74.4% of organizations involving testers in every sprint. This high percentage is a good sign indicating the importance and need for quality and quality assurance processes, especially among enterprises.
  • However, there are still around 7% of organizations where testers are never involved in sprint planning. This number is higher in small enterprises, where 10.4% of organizations do not involve QA teams in sprint planning.

How often are testers in your organization involved in sprint planning?

[Graphs: Overall / Small / Medium / Large]

Contribution to Automation Tests...

  • For as long as automation testing has been around, there have been debates on who should write automation tests: developers or dedicated automation testers. Some time back there were even calls to make developers the sole authors of automation tests. However, the data tells a different story. In most companies, both small and large, tests are written either by dedicated SDETs (39.3% of organizations) or through collaboration between developers and testers (38.6% of organizations).
  • In 13% of smaller organizations, developers are solely responsible for writing automation tests. In addition, we also saw that smaller organizations tend to have a lower number of testers per 10 developers as well. Combining these insights, it becomes apparent that in smaller organizations developers wear multiple hats, with more generalizations and fewer specializations.

Who writes automation tests?

[Graphs: Overall / Small / Medium / Large]

Release Cycles...

Release Frequency...

  • Our earlier data shows that over 88% of organizations have adopted CI/CD tools. This has enabled them to release fast, with over 20% of organizations releasing every day and 40% releasing weekly. This is especially true for small and medium companies, which naturally have more agile teams.
  • Large-scale enterprises, on the other hand, are still slower despite higher CI/CD adoption, with nearly half of these organizations releasing monthly or quarterly. While release cycles are highly subjective from product to product, long release cycles are indicative of a long turnaround time for fixing bugs.

What is the frequency of production deployment/feature releases?

[Graphs: Overall / Small / Medium / Large]

Test Case Execution...

  • Even though we see a lot of claims about cloud adoption in testing, about 48% of organizations still prefer local machines or self-hosted in-house grids to execute test automation. This leads to challenges like high flakiness, scalability issues, and considerable time spent on test infrastructure maintenance.

How do you execute your automated test cases?

[Graphs: Overall / Small / Medium / Large]

State of Testing...

State of Test Infrastructure...

Multiple Framework Strategy...

  • There have been many debates and dissertations on what framework to choose for test automation, or why some frameworks are better than others. However, the data shows that most organizations do not stick to a single-framework strategy. Choosing the right framework depends on many factors, and it is not necessary to stick to one tool only. This is visible in our survey as well, with 74.6% of organizations using 2 or more frameworks for their automation and 38.6% using more than 3 frameworks. These organizations likely recognize the complexity and diversity of their testing requirements, warranting a varied toolkit to effectively address each testing challenge.
  • Another interesting finding was that 23.5% of organizations use Selenium, Cypress, and Playwright at the same time.

* We had asked respondents to pick from multiple frameworks, including Selenium, Cypress, TestNG, Cucumber, JUnit, Appium, WebDriverIO, Playwright, Mocha, Jest, XCUITest, Espresso, Puppeteer, Selendroid, and Robotium, with an option to add others.

Number of frameworks/tools used in browser/mobile automation

[Graph: Overall]

App Testing ...

Number of digital devices for testing

[Graphs: Overall / Small / Medium / Large]

Legacy Browser Testing ...

  • 19.30% adopt a testing strategy covering the five most recent browser versions and legacy ones. This comprehensive approach aims for a consistent user experience, recognizing the diversity of the user base.

Number of browser versions to test

[Graphs: Overall / Small / Medium / Large]

Mobile Device Testing ...

  • 33% of organizations said they use both emulators/simulators and real devices when testing for handheld devices.
  • 25% of companies still use browser mobile viewports to test their mobile apps. Relying solely on browser viewports carries a higher risk of missed issues, especially device-specific bugs.

When testing for handheld devices, where do you test most often?

[Graphs: Overall / Small / Medium / Large]

In the era of AI, testers become orchestrators, guiding the machine to generate code, optimize tests, and enhance quality. It's a collaboration that propels us into the future.

Handheld Device Diversity ...

  • About 81.1% of respondents state they use fewer than 10 devices to test their applications. There is potential for organizations to utilize cloud-based device platforms to test on a wider range of handheld devices.

How many different handheld devices do you test on?

[Graphs: Overall / Small / Medium / Large]

State of Continuous Testing ...

Testing in Concurrency ...

  • Around 44.1% of organizations are running at least 5 tests in parallel to meet their test execution needs. This is a benchmark metric for organizations to consider when adopting parallel testing.
  • What is surprising to see is that 32% of organizations are not running tests in parallel. They can improve their test execution time just by running more parallel tests.

On average, how many tests are you running in parallel at a time?

[Graphs: Overall / Small / Medium / Large]
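The gain from parallelization is easy to illustrate. The sketch below is a hypothetical example (the test function and timings are illustrative, not survey data) that uses Python's standard library to compare serial execution against 5-way parallel execution of I/O-bound tests, which browser automation usually is:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an independent automated test case;
# sleep() approximates I/O-bound work such as driving a browser.
def run_test(name):
    time.sleep(0.1)
    return (name, "passed")

tests = [f"test_{i}" for i in range(10)]

# Serial execution: wall time is roughly the sum of all test times.
start = time.time()
serial_results = [run_test(t) for t in tests]
serial_time = time.time() - start

# Parallel execution with 5 workers: wall time shrinks to roughly
# total / 5 for I/O-bound tests.
start = time.time()
with ThreadPoolExecutor(max_workers=5) as pool:
    parallel_results = list(pool.map(run_test, tests))
parallel_time = time.time() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

With 10 tests of 0.1s each, the serial run takes about a second while the 5-worker run finishes in roughly a fifth of that, which is the effect the 32% of organizations not running tests in parallel are leaving on the table.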

Test Execution Time ...

  • 28% of large organizations in particular and 26% of mid-sized organizations spend more than 60 minutes executing their test builds. This can be mitigated through more parallelization or through the adoption of smarter tooling with features like smart waits and test orchestration to minimize this test execution time.
  • This also serves as a benchmark. If you are wondering how fast your test execution times should be, then it has to be under 60 minutes as most organizations are running tests under this time limit. If you are taking more time to execute tests, then you may need to rethink your test execution strategy and try to cut down the time by either smarter test execution or scaling infrastructure.

What is the average time required to execute your complete automation test build?

[Graphs: Overall / Small / Medium / Large]

While the possibilities of AI in code generation are speculative, its continued role in technological advancements and ethical considerations ensures its significant presence in the future of QA.

Adoption of CI/CD Tools ...

  • 88.9% of organizations use CI/CD tools to test or deploy their apps. Adoption is even higher in large-scale organizations, reaching up to 92.6%, denoting high adoption of CI/CD tools.

Do you use CI/CD tools to test or deploy your app?

[Graphs: Overall / Small / Medium / Large]

Automation Trigger Mechanisms ...

  • Although around 88% of organizations say they use CI/CD tools, about 45% still trigger tests manually. Continuous testing is not just writing automation tests; it is automation of the complete process, with less human intervention. These organizations could eliminate manual test triggering for more efficiency.

How are automated tests triggered?

[Graphs: Overall / Small / Medium / Large]

Responsibility for CI/CD Testing Integration ...

  • In 46.4% of organizations, SDETs and QA engineers integrate automation tests into the CI/CD pipeline, which means they should be upskilled to handle these integrations and provided with the right toolset.

Who integrates automation tests on CI/CD pipelines?

[Graphs: Overall / Small / Medium / Large]

Test Orchestration ...

  • Running tests brute force on a first-come-first-served basis is not the ideal way to run tests. Smart test orchestration is required to run tests efficiently, for faster execution times and better developer feedback times. Yet 36.5% of organizations are not orchestrating tests in any way.
  • In addition, 44% of organizations orchestrate tests via CI/CD tools or the frameworks themselves. This can be improved further through dedicated test orchestration and execution platforms like HyperExecute.

How are you orchestrating your automated tests?

[Graphs: Overall / Small / Medium / Large]

Test Case Prioritization ...

  • 52.5% of organizations prioritize their testing based on the criticality of the feature or functionality, while hardly 5.5% prioritize test cases based on past test runs and customer feedback. In other words, most organizations prioritize around perceived critical features, and only a small percentage actively incorporates insights from past testing experience and direct customer feedback.
  • 21.5% of organizations run tests without any prioritization, which means there is scope for optimizing test execution for faster results and faster developer feedback.

How do you prioritize test cases during test execution?

[Graphs: Overall / Small / Medium / Large]
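Risk-based prioritization can be sketched in a few lines. The example below is purely illustrative (the test names, metadata fields, and weights are hypothetical, not from the survey): each test receives a score from feature criticality, recent failures, and customer reports, and the suite runs highest-score first so regressions surface early.

```python
# Hypothetical test metadata; field names and weights are illustrative.
tests = [
    {"name": "test_checkout", "criticality": 5, "recent_failures": 3, "customer_reports": 2},
    {"name": "test_profile_page", "criticality": 2, "recent_failures": 0, "customer_reports": 0},
    {"name": "test_login", "criticality": 5, "recent_failures": 1, "customer_reports": 4},
    {"name": "test_footer_links", "criticality": 1, "recent_failures": 0, "customer_reports": 0},
]

def priority(test, w_crit=1.0, w_fail=2.0, w_cust=1.5):
    # Recent failures are weighted highest: a test that failed lately
    # is the most likely to catch a regression again.
    return (w_crit * test["criticality"]
            + w_fail * test["recent_failures"]
            + w_cust * test["customer_reports"])

# Run order: highest priority first.
ordered = sorted(tests, key=priority, reverse=True)
print([t["name"] for t in ordered])
```

The point of the sketch is the inputs, not the weights: past test runs and customer feedback, the signals only 5.5% of organizations use, are exactly what pushes the riskiest tests to the front of the queue.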

State of Test Analytics ...

Bug Tracking in Production ...

  • 58.8% of organizations state they identify up to 10% of their bugs in the production environment. This metric serves as a benchmark: if the percentage of bugs identified in production exceeds 10%, it indicates a need to optimize your testing processes.

How many bugs are identified in the production environment?

[Graphs: Overall / Small / Medium / Large]
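The benchmark above is often tracked as a defect escape rate. A minimal sketch (the function name and the example numbers are hypothetical, not survey data):

```python
def defect_escape_rate(bugs_in_production, bugs_pre_release):
    """Share of all identified bugs that escaped to production."""
    total = bugs_in_production + bugs_pre_release
    return bugs_in_production / total if total else 0.0

# Example: 8 bugs found in production against 92 caught before release,
# an escape rate just under the 10% benchmark discussed above.
rate = defect_escape_rate(8, 92)
print(f"escape rate: {rate:.0%}")
```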

Mean Time to Detect Testing Anomalies ...

What is the mean time to detect a test failure?

[Graphs: Overall / Small / Medium / Large]

Test Failure Resolution Timeframe ...

What is the mean time to fix a test failure?

[Graphs: Overall / Small / Medium / Large]

Test Observability Metrics ...

What metrics and KPIs do you track as part of your test observability?

[Graphs: Overall / Small / Medium / Large]

The effectiveness of Test Optimization Tools (TOT) will be significantly enhanced with the integration of AI, ushering in a new era of more efficient testing processes.

Flaky Test Detection ...

  • Flaky tests continue to be a challenge: 58% of teams see more than 1% of their test runs turn out flaky, and more than 24% of large organizations see over 5% of their tests behave flakily. Better tooling for identifying flaky tests is required.

How many tests run on average give flaky results?

[Graphs: Overall / Small / Medium / Large]
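One simple way to flag flakiness is to rerun a test repeatedly against unchanged code and check whether it both passes and fails. The sketch below is illustrative (both test functions are hypothetical; a seeded random source stands in for real timing sensitivity so the example stays deterministic):

```python
import random

def is_flaky(test_fn, runs=20):
    # A test is flagged flaky when repeated runs against identical
    # code produce both passes and failures.
    outcomes = {test_fn() for _ in range(runs)}
    return outcomes == {True, False}

def stable_test():
    return True

# Simulates a timing-sensitive test that fails intermittently;
# the seeded generator keeps the demonstration reproducible.
def timing_sensitive_test(rng=random.Random(42)):
    return rng.random() > 0.3

stable_flaky = is_flaky(stable_test)
timing_flaky = is_flaky(timing_sensitive_test)
print(stable_flaky, timing_flaky)  # False True
```

Real flaky-detection tooling adds quarantining and root-cause hints on top, but the core signal is exactly this: same code, different outcomes.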

State of Test Intelligence Toolset ...

Test Intelligence Tools ...

  • 71.4% of the organizations said they have either in-house tools, open-source tools, or commercially licensed tools they use for test intelligence and analytics.
  • 28.60% of organizations lack a setup for test intelligence and analytics.

What tools does your organization use for test intelligence and analytics?

[Graphs: Overall / Small / Medium / Large]

Test Reporting ...

  • 30.07% of organizations favor open-source platforms for generating and managing their test reports.
  • 19.16% do not have a structured reporting system in place or do not use any tool for reporting, suggesting room for improvement in test reporting practices.

Which reporting platform do you use?

[Graph: Overall]

AI/ML in Testing...

State of AI/ML in Testing ...

Gen-AI Tools ...

  • 80.2% of organizations use text generation tools like ChatGPT and Bing Chat, indicating widespread adoption of GenAI platforms.
  • After text generation, code-generation tools are the most favored GenAI tools among those surveyed, with 44% having used some form of code-gen tool like GitHub Copilot, OpenAI Codex, or AlphaCode. This reflects the popularity of leveraging AI for coding tasks, potentially enabling faster development and creating demand for faster testing.

What types of generative AI tools are you familiar with?

[Graphs: Overall / Small / Medium / Large]

AI/ML Tools in Testing ...

  • Surprisingly, the heaviest use of GenAI is test data generation, with more than 50% of teams generating test data using AI.
  • After test data, test case creation was the most sought-after use case of AI, particularly among medium and large organizations (around 48.80% and 48.60%, respectively).
  • In terms of cognitive AI-based use cases, analysis of test logs and reporting is most prominent, especially in large organizations (37.90%), followed by using AI for visual regression testing.
  • 34.8% of larger organizations have adopted AI for Visual regression testing.
  • 26.4% of medium-sized organizations are not planning to use AI in testing, highlighting a considerable chunk of the segment still on the fence about the efficacy of AI tooling.

How do you leverage AI/ML in your testing processes?

[Graphs: Overall / Small / Medium / Large]

Adoption of AI ...

Benefits of AI in QA ...

  • 25.60% see AI as a means to bridge the gap between manual and automated testing, suggesting an expectation that AI will not only improve testing but also aid professional growth.
  • 29.90% believe that AI can enhance productivity in software quality assurance. This indicates a strong belief that AI can streamline the QA process and significantly boost output.

What do you think would be the biggest benefit of AI in the software quality assurance process?

[Graph: Overall]

Availability of Training Resources for AI Testing ...

  • 33.3% of organizations feel that training resources are not sufficient and 28% say they are limited. This indicates a perceived gap in adequate training content for AI in testing.

Do you think there's a sufficient amount of training available for testers to adapt to AI-driven testing methods?

[Graph: Overall]

Future of AI in Testing ...

  • 60.60% of organizations believe that AI will improve the productivity of teams, and humans will continue to play a major role in testing. This suggests a widespread view that AI will be an enhancer rather than a full replacement in the testing process.

Do you believe testing can become entirely AI-driven in the future?

[Graph: Overall]

Challenges with AI ...

Factors Hindering AI Integration in QA ...

  • 60.3% of organizations say the most significant concern is the reliability of AI platforms for the process of quality assurance. This reflects apprehension about AI's consistency in delivering accurate results.
  • 54.4% of organizations cite a lack of skilled professionals in the field of AI, highlighting the need for upskilling in this evolving field.

What do you think are the main obstacles when integrating AI into quality assurance processes?

[Graph: Overall]

Methodology...

Response Logging

We received 2300+ responses from 78 different countries; however, we cleaned the data based on several parameters to ensure accuracy and reduce spam. The data was cleaned as follows.

  • We removed duplicate responses and kept the response that was most complete.
  • Where responses came from identical IP addresses and had up to 80% similar answers, we kept only one response and removed the rest.
  • We removed surveys with conflicting responses, for example, responses that listed total working experience as 15 years but an age group of 18-25 years.

Based on this, we reduced the number of qualified responses to 1615 from 70 countries.
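As an illustration of the cleaning rules above, here is a minimal sketch in Python. The records, field names, and the per-IP deduplication are simplified, hypothetical examples; the real pipeline also compared answer similarity between responses.

```python
# Hypothetical raw responses; only fields relevant to cleaning are shown.
raw = [
    {"id": 1, "ip": "10.0.0.1", "role": "QA Engineer", "experience": "0-3", "company_size": None},
    {"id": 2, "ip": "10.0.0.1", "role": "QA Engineer", "experience": "0-3", "company_size": "Large"},
    {"id": 3, "ip": "10.0.0.2", "role": "Test Lead", "experience": "15+", "age_group": "18-25"},
    {"id": 4, "ip": "10.0.0.3", "role": "SDET", "experience": "4-6", "age_group": "26-35"},
]

def completeness(resp):
    # More answered (non-None) fields means a more complete response.
    return sum(v is not None for v in resp.values())

def is_conflicting(resp):
    # Example consistency rule from the methodology: 15+ years of
    # experience cannot coexist with the 18-25 age group.
    return resp.get("experience") == "15+" and resp.get("age_group") == "18-25"

# Keep the most complete response per IP, then drop conflicting ones.
best_by_ip = {}
for resp in raw:
    if resp["ip"] not in best_by_ip or completeness(resp) > completeness(best_by_ip[resp["ip"]]):
        best_by_ip[resp["ip"]] = resp

cleaned = [r for r in best_by_ip.values() if not is_conflicting(r)]
print([r["id"] for r in cleaned])  # [2, 4]
```

In this toy run, the more complete duplicate from the shared IP survives and the internally inconsistent response is dropped, mirroring the rules described above.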

Targeting

The survey was open for responses from 21st July 2023 to 31st October 2023, around 102 days in total.

The survey was part of the Testμ 2023 Conference and a large number of responses were recorded from conference registrants and attendees.

Apart from that, we fielded the survey among

  • LambdaTest users,
  • LambdaTest’s Coding Jag Newsletter subscribers,
  • LambdaTest blog posts,
  • LambdaTest’s social media channels,
  • Tester and developer based user groups,
  • Online tech communities,
  • Online slack and discord channels,
  • Reached out to our connections over direct messages,
  • and we asked our respondents to share the survey with their peers.

We received a significant number of responses from countries like Argentina, Brazil, Canada, France, Germany, India, Japan, Mexico, South Korea, Spain, the United Kingdom, and the United States.

Survey Bias

While we have tried to reduce bias by targeting and reaching out to a very diverse set of software professionals, there will be some bias inherent in the sample. For example, as the survey was not localized, there were far more responses from English-speaking or English-educated countries.

The software development and quality assurance ecosystem is evolving at a rapid pace, which may have introduced bias between earlier and later responses as well, especially for questions related to AI adoption.

Future of Quality Assurance Survey 2024

We will continue to improve our methodologies based on feedback received and incorporate all valid suggestions for 2024.

If you have any suggestions for the Future of Quality Assurance Survey 2024, feel free to reach out to us at support@lambdatest.com.

If you want the raw data to do a deeper dive of your own, reach out to press@lambdatest.com or marketing@lambdatest.com and we would be happy to collaborate.