Software Testing Glossary: A-Z Guide To Testing Terminology



When it comes to software testing, there is plenty of information available, and it can be hard to know where to begin. If you're new to software testing, you've probably encountered many unfamiliar acronyms and jargon. Learning these testing terms is crucial to expanding your professional vocabulary.

This software testing glossary covers some of the basic software testing and quality assurance terms commonly used by QA testers.

Let's begin!

A/B Testing

A/B testing, or split testing, creates one or more variants to test against the current webpage to determine which performs better against agreed metrics, such as conversion rate or revenue per visitor (for e-commerce websites).
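As a minimal sketch, the comparison boils down to computing the agreed metric per variant. The variant names and numbers below are hypothetical illustration, not data from a real experiment:

```python
# Minimal A/B comparison: compute the metric (here, conversion rate)
# for the control page and the variant, then compare.
def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the goal action."""
    return conversions / visitors if visitors else 0.0

rate_a = conversion_rate(48, 1000)  # current page (control)
rate_b = conversion_rate(63, 1000)  # variant under test
better_variant = "B" if rate_b > rate_a else "A"
```

A real A/B test would also check statistical significance before declaring a winner.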

Acceptance Testing

Potential end users or customers perform acceptance testing to check whether the software meets their requirements and can be accepted for end use.

Discover how to Automate Acceptance Tests For Mobile Apps in our earlier tutorial on Acceptance testing.

Agile Development

Agile software development is an iterative approach to software development in which requirements are chalked out and solutions are implemented through collaboration between self-organizing, cross-functional teams.

Check our article to explore further about Agile development.

Beta Testing

Beta testing is external user acceptance testing and is the last testing performed before a product is released to the audience. In beta testing, a nearly completed version of the software, the beta version, is released to a limited number of end-users for testing.

This beta testing method is performed to gain feedback on accessibility, usability, reliability, functionality, and other aspects of the developed software.

Want to delve deeper into Beta testing? Read our Mobile App testing tutorial on Testing Beta Applications.

Bottom-up Integration

In bottom-up integration testing, the lowest-level modules are tested first and then integrated and tested with progressively higher-level modules until all modules are covered. Drivers are used to simulate the higher-level modules that have not yet been integrated.
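A small sketch of the driver idea, with hypothetical module names: the driver stands in for the missing higher-level module and feeds inputs directly to the lower-level unit.

```python
# Lower-level module under test (hypothetical example).
def calculate_shipping(weight_kg, rate_per_kg=4.5):
    """Compute shipping cost from package weight."""
    return round(weight_kg * rate_per_kg, 2)

def shipping_driver():
    """Driver: simulates the higher-level caller that does not
    exist yet, exercising the lower-level unit directly."""
    checks = [
        calculate_shipping(2.0) == 9.0,
        calculate_shipping(0.0) == 0.0,
    ]
    return all(checks)
```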

BS 7925-2

BS 7925-2 is a Software Component Testing Standard. It describes the process for component testing using specific test-case designs and measurement systems. This improves the quality of software testing and, in turn, the quality of the software products.


Bug

A bug is a problem that causes a program to crash or produce invalid output, typically caused by insufficient or erroneous logic. A bug can be an error, mistake, defect, or fault that leads to failure or deviation from expected results.


CAST

The Certified Associate in Software Testing (CAST) certification demonstrates a foundational understanding of quality testing principles and practices, and acquiring the designation shows a professional level of expertise in software testing in the IT profession.

Change Requests

Change requests come from stakeholders in the software development process who want to change something in a product or production method. Common change requests include defects and requests for product enhancements or new features.


CMMI

The CMMI, or Capability Maturity Model Integration, is a structured collection of best practices in engineering, service delivery, and management. It aims to help companies improve their ability to satisfy customers through an ever-increasing understanding of their own capabilities.

The framework organizes practices by the effectiveness of the practice itself ("capability") and by the implementation of structured combinations of effective practices within the organization ("maturity").


Component

A software component is a unit of composition with contractually specified interfaces and explicit context dependencies only.

Configuration Management

Configuration management is an engineering process for maintaining the consistency of a product's attributes throughout its life. In the technology world, configuration management is an IT management process that tracks individual configuration items of an IT system.


COTS

Commercial Off-the-Shelf (COTS) software is an ever-increasing part of organizations' comprehensive IT strategy for building and delivering systems. A common perception, however, is that since a vendor developed the software, the vendor carries much of the testing responsibility.

Decision Table

A decision table is an excellent tool for testing and requirements management. It is a structured way to break down requirements when working with complex rules, and decision tables are used to model complicated logic. A decision table shows all possible combinations of conditions to be considered and reveals conditions that may have been missed.
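A decision table can be expressed directly as data. In this hedged sketch, the discount rules and condition names are hypothetical; enumerating the keys makes missing combinations obvious:

```python
# Decision table as data: every combination of two boolean conditions
# maps to an action (a discount percentage).
DECISION_TABLE = {
    # (is_member, order_over_50): discount percent
    (True,  True):  15,
    (True,  False): 10,
    (False, True):   5,
    (False, False):  0,
}

def discount(is_member, order_over_50):
    """Look up the action for a given combination of conditions."""
    return DECISION_TABLE[(is_member, order_over_50)]

# Two boolean conditions give 2**2 combinations; comparing the count
# checks that no combination is missed.
all_conditions_covered = len(DECISION_TABLE) == 2 ** 2
```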


Defect

A defect is a discrepancy, or variance, between expected and actual results in testing, and a deviation from the customer's requirement. By convention, an error found in the product after the application goes into production, once it has been shipped to the customer, is called a defect.


Deliverable

A deliverable is something that is delivered; in software engineering, that is usually code or documentation. Plenty of work makes the deliverable possible, such as testing modules or researching the best way to do something, but that work is not itself a deliverable.


Driver

A software driver is system software that controls a hardware device. It helps hardware components attached to devices such as PCs, tablets, and smartphones communicate with the operating system and other applications so the components can function. In software testing, a driver also refers to code that simulates a higher-level module, as in bottom-up integration testing.

End-To-End Testing

End-to-End testing is a software testing technique that scrutinizes the functioning of a software/application from the start to the end. It dissects the overall flow of the software and how it functions in different environments. It also checks if the application flow is as expected.

For more details on End-To-End testing, check out our Complete Tutorial on End-To-End Testing.


Error

An error is a difference between what the software is supposed or expected to do and what it actually does. A defect in the software can result in erroneous behavior.


Execution

Test execution is simply performing (executing) the tests to verify specific functionality. It can be manual, where a person follows all the steps documented in the test cases, or automated, where an automation testing tool runs the steps.

Expected Result

An expected result is the predicted outcome of a test, defined in the test case before execution. After the test runs, the expected result is compared with the actual result to decide whether the test passed or failed.
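As a minimal sketch (the function and values are hypothetical), a test fixes the expected result up front and compares it with the actual result after execution:

```python
# The expected result is fixed in the test case before execution;
# the actual result comes from the code under test.
def add(a, b):
    return a + b

expected = 5
actual = add(2, 3)
test_passed = (actual == expected)
```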

Extreme Programming

Extreme Programming, commonly known as XP, is an agile method focused on software development. While Scrum focuses on prioritizing work and getting feedback at the project management level, XP focuses on software development best practices. XP values and practices can also be applied to other knowledge-work projects.


Factory Acceptance Testing

Factory Acceptance Testing (FAT) verifies that newly manufactured and packaged equipment meets its intended purpose. The FAT checks the system's functioning and ensures the customer's requirements have been met.

Functional Integration

Functional Integration associates products and services with an ecosystem that attracts and retains customers.

Functional Testing

Functional testing verifies that each software application function operates according to the requirement. This testing primarily involves black box testing and is not concerned about the application's source code.

Refer to our guide on Functional testing to learn how to utilize it for your web applications.


History

A brief record of all the changes that have happened to a test helps users identify the root cause of an error when one occurs.

IEEE 829

IEEE 829 is a Software Test Documentation standard that specifies the format of the documents to be used in the different stages of the testing life cycle.

Incident Report

An incident report is a detailed description of the incident observed and contains data like Summary, Steps Used, Priority, Severity, No. of Test Cases Impacted, Status, Assigned To, etc. An incident report is essential as it helps keep track of the incidents and provides information to concerned people.


Inspection

Inspection refers to a peer review of any work product by trained individuals who look for defects using a well-defined process. An inspection may also be called a Fagan inspection, after Michael Fagan, the creator of a formal software inspection process.


Iterative Testing

Iterative testing is making small, gradual changes or updates to a product based on insights such as test results and user feedback from earlier changes, and testing them against predefined baseline metrics.


Maintainability

Maintainability refers to the ease with which the system under test can be updated or modified. This is an important parameter, as the system is subject to changes throughout the software life cycle.

Manual Testing

Manual testing involves verifying whether the functionalities are working as expected or not.

Intrigued to learn more about Manual testing? Skim our Manual testing tutorial.


Mean Time Between Failures (MTBF)

Mean time between failures (MTBF) is the average time between failures of a piece of repairable equipment. It can be used to estimate when equipment may fail unexpectedly in the future or when it needs to be replaced.
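The arithmetic is simple: total operating time divided by the number of failures observed in that time. A minimal sketch with illustrative numbers:

```python
def mtbf(total_operating_hours, failure_count):
    """Mean time between failures: operating time divided by the
    number of failures observed in that time."""
    if failure_count == 0:
        raise ValueError("MTBF is undefined when no failures occurred")
    return total_operating_hours / failure_count

# e.g. 500 operating hours with 5 failures gives an MTBF of 100 hours
```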

Non-functional Testing

Non-functional testing is a testing term that encompasses various techniques for evaluating and assessing the non-functional attributes of a software application. Its primary purpose is to evaluate an application's competency and effectiveness. Non-functional testing checks the system's non-functional requirements, such as usability, performance, and reliability.

Operational Testing

Operational testing confirms that a product, system, service, and process meets operational requirements. Operational requirements include performance, security, stability, maintainability, accessibility, interoperability, backup, and recovery. It is a type of non-functional acceptance testing.

Pair Testing

Pair testing is a collaborative effort versus a single-person testing effort. Typically, one of the team members is a tester, and the other is either a developer or a business analyst.


Postcondition

A postcondition is a requirement that must be true immediately after the execution of some section of code. Postconditions are sometimes checked using predicates (assertions) within the code itself.
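A small sketch of an in-code postcondition check (the function is a hypothetical example):

```python
def pop_last(items):
    """Remove and return the last element of a non-empty list.

    Postcondition: the list is exactly one element shorter.
    """
    length_before = len(items)
    value = items.pop()
    # The postcondition is checked by an assertion in the code itself.
    assert len(items) == length_before - 1, "postcondition violated"
    return value
```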


Priority

Priority is the order or importance of an issue or test case based on user requirements, while severity is the impact that the issue or test-case failure will have on the system. Typically, the business analyst or client decides priority, and the tester decides severity, having seen the impact on the system. This convention may not be followed everywhere.


Quality

Quality refers to conformance to implicit or explicit requirements, expectations, and standards. To fulfill these requirements, a quality control mechanism is set up. Quality Control (QC) is how you achieve or improve product quality.

Quality Assurance

QA testing, or quality assurance, is the process of ensuring that the product or service provided to customers is of the best possible quality. QA focuses on improving processes for delivering quality products.


Retesting

In retesting, the tester retests the application functionality that was reported as a bug and has since been fixed by the developer. The bug may have been due to a functionality issue or a design issue.

Regression Testing

Regression testing verifies, after changes to the product or software, that the existing functions and programs still work alongside the newly added changes. It is an integral part of the program development process and is done by code testing specialists.

Streamline your development process with our Definitive Guide To Regression Testing.

Release Testing

Release testing tests a new software version (project or product) to verify that the software can be released. Release testing has a broad focus since the full functionality of the release is under test. Therefore, the tests included in release testing are strongly dependent on the software itself.


Reviewer

Reviewers are domain experts who methodically assess the code to identify bugs, improve code quality, and help developers learn the source code. If the code covers more than one domain, two or more experts should review it.


RUP

The Rational Unified Process (RUP) is a software development process developed by Rational, a division of IBM. It divides development into four phases: inception, elaboration, construction, and transition, with disciplines such as business modeling, analysis and design, implementation, testing, and deployment running through them.


Scenario

A scenario is one usage example. A piece of software can usually be used for more than one particular thing, and each specific use can be described with a concrete example. These examples are often referred to as scenarios.


Severity

Severity is the measurement of a defect's impact on the system or application under test. The higher the impact on system functionality, the higher the severity assigned to the defect. The quality assurance engineer usually determines the severity level of a defect.


Shift Left Testing

The shift-left test strategy moves testing to the beginning of the software development process. By testing your project early and often, you can reduce the number of errors, improve code quality, and avoid discovering critical issues that require patching at the deployment phase.

In other words, shifting left means introducing testing in the earlier stages of the software development life cycle rather than testing only at the end.

Read more to learn how to Implement Shift-Left Testing.

State Transition Testing

State Transition testing is a black-box testing method used to observe the system's behavior for different input conditions passed in series. Both positive and negative input values are given, and the system's behavior is observed.
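A minimal sketch of the idea, using a hypothetical card-PIN state machine: valid (state, input) pairs move the machine, while unknown (negative) inputs leave the state unchanged, and a series of inputs is checked against the expected final state.

```python
# Transition table for a hypothetical card-PIN state machine.
TRANSITIONS = {
    ("locked", "correct_pin"): "unlocked",
    ("unlocked", "logout"): "locked",
}

def run(inputs, state="locked"):
    """Feed a series of inputs and return the resulting state."""
    for event in inputs:
        # Negative/unknown inputs do not change the state.
        state = TRANSITIONS.get((state, event), state)
    return state
```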

Structural Testing

Structural testing examines the structure of the software's code; it is also known as white-box or glass-box testing and is performed mainly by developers. The process aims to determine how the system works rather than to verify its functionality. For example, if an error message pops up, structural testing helps find and fix the underlying issue.


System

A system is a set of components formed for a common purpose. The word sometimes describes the organization or plan itself and sometimes describes the parts in the system (as in "computer system").

Test-Driven Development (TDD)

Test-driven development (TDD) is a transformational approach that combines test-first development with refactoring: you write a failing test, write just enough production code to make that test pass, and then refactor.
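A compressed sketch of one TDD cycle, with a hypothetical `slugify` function (in practice the test is written and run red before the production code exists):

```python
import unittest

# Step 1 (red): the test is written first, against a slugify
# function that does not yet exist.
class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): just enough production code to make the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up the code while keeping the test green.
```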

Test Data

Test data is the input data supplied to the system or software application under test. Varying the test data checks that the application handles error conditions correctly, so QA should always provide different test data to test the application thoroughly.

Test Environment

A Test Environment is a setup in which testing teams execute test cases. In other words, it supports test execution with the hardware, software, and network configured. The testbed, or test environment, is configured per the needs of the Application Under Test.

Explore more about Test Environment and its major features and elements.

Test Log

A Test Log is one of the crucial test artifacts prepared during testing. It provides a detailed summary of the overall test run and indicates the passed and failed tests. Additionally, the test log contains details and information about various test operations, including the source of issues and the reasons for failed operations. The focus of this report/document is to enable post-execution diagnosis of failures and defects in the software.

Test Plan

A Test Plan is a document that describes the testing objective and activities. The test lead prepares it, and the target audience is the project manager, project team, and, where applicable, the business. The test plan clearly states the testing approach, pass/fail criteria, testing stages, automation plan (if applicable), suspension and resumption criteria, training, and so on. It also includes the testing risk and contingency plan.

Manage your tests effectively by referring to our guide on Devising A UI Test Plan.

Test Report

A Test report is a brief of objectives, activities, and test results. It is maintained to help stakeholders understand product quality and decide whether a product, feature, or defect resolution is on track for release.

Test Specification

Test specifications are iterative, generative drafts of test design. They allow test developers to create new versions of a test at the item level for different populations. The specs also serve as guidelines so that new versions can be compared with previous ones.

Test Suite

A Test suite is a sequence of tests that ensure all the features of your application are functioning as expected. An automated test suite runs all the tests automatically and gives you a pass/fail result for each test. Some test suites take hours and sometimes days to complete.

Automated test suites are valuable because they can be run repeatedly without a human manually clicking and typing through the application. Automated tests also prevent false results caused by human error.
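As a hedged sketch with Python's standard `unittest` module, a suite can bundle tests from several test cases (the cases here are trivial placeholders) so they run, and report, as one unit:

```python
import unittest

# Two small test cases whose tests will be bundled into one suite.
class TestMath(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

class TestText(unittest.TestCase):
    def test_upper(self):
        self.assertEqual("pass".upper(), "PASS")

def build_suite():
    """Collect tests from both cases into a single TestSuite."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(TestMath))
    suite.addTests(loader.loadTestsFromTestCase(TestText))
    return suite

# Run the whole suite and get one pass/fail result per test.
result = unittest.TextTestRunner(verbosity=0).run(build_suite())
```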

Top-Down Integration

This process tests the high-level (parent) modules first, then the lower-level (child) modules, integrating as it goes. Stubs, small segments of code, simulate the data responses of lower modules until those modules are thoroughly tested and integrated.
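A small sketch of the stub idea, with hypothetical module names: the high-level checkout module is tested first, while the lower-level payment gateway is replaced by a stub returning canned data.

```python
# Stub: simulates the not-yet-integrated lower-level payment module
# with a canned response.
def payment_stub(amount):
    return {"status": "approved", "amount": amount}

def checkout(amount, charge=payment_stub):
    """Higher-level module under test; `charge` defaults to the stub
    until the real payment module is integrated."""
    response = charge(amount)
    return response["status"] == "approved"
```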

Traceability Matrix

In software development, a traceability matrix is a table-type document used to track requirements. In addition to forward tracing (from requirements to design or coding), it can also be used for backward tracing (from coding to requirements). It is also called a Requirement Traceability Matrix (RTM) or Cross Reference Matrix (CRM).

Unit Test Framework

A unit test framework provides software tools for writing and executing unit tests, including a foundation on which to build tests and facilities for executing tests and reporting results.

Unit Testing

Unit testing involves testing individual units or components of the software, with each unit validated to ensure it performs as intended. Every software program has testable units, typically with one or a few inputs and a single output.
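A minimal sketch of a unit and its tests (the function is a hypothetical example): the unit has one input and one output and is tested in isolation, one behavior per check.

```python
# Unit under test: a single function with one input and one output.
def is_even(n):
    return n % 2 == 0

# Unit tests exercise the unit in isolation.
def test_is_even():
    assert is_even(4)
    assert not is_even(7)
    assert is_even(0)
```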

Use Case

A Use case describes how an actor or user uses the system. It is widely used to develop system-level or acceptance-level tests.


Verification

Verification refers to activities that ensure the software correctly implements a specific function. Verification is done against the design: it confirms that the developed software implements all the functionality specified in the design document.

White Box Testing

White Box testing tests a software solution's internal coding and infrastructure. It focuses primarily on strengthening security, the flow of inputs and outputs through the application, and improving design and usability. White box testing is also known as Clear Box testing, Open Box testing, Structural testing, Transparent Box testing, Code-Based testing, and Glass Box testing.