Software Testing Glossary: One-Stop Guide for 100+ Testing Terms

CHAPTERS

  1. Overview
  2. A/B Testing
  3. API
  4. API Testing
  5. Acceptance Testing
  6. Acceptance Test Driven Development
  7. Accessibility Testing
  8. Actual Result
  9. Ad Hoc Testing
  10. Agile Development
  11. Agile Testing
  12. Alpha Testing
  13. Analytical Test Strategy
  14. Automated Testing
  15. Back-to-Back Testing
  16. Beta Testing
  17. Big Data Testing
  18. Big Bang Testing
  19. Black Box Testing
  20. Bottom-up Integration
  21. Build Automation
  22. BS 7925-2
  23. Bug
  24. Canary Testing
  25. CAST
  26. Chaos Engineering
  27. Chaos Testing
  28. Change Requests
  29. CI/CD Testing
  30. CMMI
  31. Code Coverage
  32. Code Review
  33. Code Reusability
  34. Coding Standards
  35. Compatibility Testing
  36. Component
  37. Component Testing
  38. Concurrency Testing
  39. Configuration Management
  40. Contract Testing
  41. Content Testing
  42. Context Driven Testing
  43. Continuous Testing
  44. COTS
  45. Cucumber Testing
  46. Cross Browser Testing
  47. CSS Testing
  48. Cypress Snapshot Testing
  49. Database Testing
  50. Data Driven Testing
  51. Data Flow Testing
  52. Debugging
  53. Decision Table
  54. Defect
  55. Defect Management
  56. Deliverable
  57. DevOps Testing
  58. Distributed Testing
  59. Driver
  60. Dynamic Testing
  61. End-To-End Testing
  62. Error
  63. Error Logs
  64. Emulator
  65. Execute
  66. Exhaustive Testing
  67. Expected Result
  68. Exploratory Testing
  69. Extreme Programming
  70. FAT
  71. Front-end Testing
  72. Functional Integration
  73. Functional Testing
  74. Future Proof Testing
  75. Game Testing
  76. Glass Box Testing
  77. Grey Box Testing
  78. Headless Browser Testing
  79. History
  80. IEEE 829
  81. Incident Report
  82. Incremental Testing
  83. Inspection
  84. Integration Testing
  85. Iteration
  86. Interface Testing
  87. Jest Testing
  88. JUnit Testing
  89. Keyword Driven Testing
  90. Key Performance Indicators (KPI)
  91. Load Testing
  92. Localization Testing
  93. Maintainability
  94. Maintenance Testing
  95. Manual Testing
  96. Mental Models
  97. Microservices Testing
  98. Mobile App Testing
  99. Mobile Device Testing
  100. MTBF
  101. Mutation Testing
  102. Negative Testing
  103. Non-functional Testing
  104. NUnit
  105. Operational Testing
  106. OTT Testing
  107. Pair Testing
  108. Page Object Model (POM)
  109. Peer Testing
  110. Performance Indicator
  111. Performance Testing
  112. Postcondition
  113. Priority
  114. Python Visual Regression
  115. QA Metrics
  116. Quality
  117. Quality Assurance
  118. Quality Management
  119. Retesting
  120. Regression Testing
  121. Release Testing
  122. Reliability Testing
  123. Responsive Design
  124. Reviewer
  125. Requirement Analysis
  126. Requirements Management Tool
  127. RUP
  128. Sanity Testing
  129. Scalability Testing
  130. Scenario
  131. Screenshot testing
  132. Security Testing
  133. Selenium Grid
  134. Selenium IDE
  135. Selenium Python
  136. Service Virtualization
  137. Severity
  138. Shift-left
  139. Shift Right Testing
  140. Smoke Testing
  141. Software Risk Analysis
  142. Software Development Life Cycle (SDLC)
  143. Software Quality
  144. Software Quality Management
  145. Software Testing
  146. Software Testing Life Cycle (STLC)
  147. State Transition Testing
  148. Static Testing
  149. Stress Testing
  150. Structural Testing
  151. System
  152. System Testing
  153. Test Analysis
  154. Test Approach
  155. Test Automation
  156. Test Case
  157. Test Class
  158. Test Comparison
  159. Test Coverage
  160. Test Infrastructure
  161. Test-Driven Development (TDD)
  162. Test Data
  163. Test Design Specification
  164. Test Design Tool
  165. Test Environment
  166. Test Estimation
  167. Test Environment Management
  168. Test Execution
  169. Test Execution Automation
  170. Test Execution Schedule
  171. Test Execution Technique
  172. Test Execution Tool
  173. Test Harness
  174. Test Tool
  175. Test Log
  176. Test Management
  177. Test Monitoring and Test Control
  178. Test Observability
  179. Testing Methodologies
  180. Test Plan
  181. Test Process
  182. Test Process Improvement Checklist
  183. Test Process Improvement
  184. Test Policy
  185. Test Pyramid
  186. Test Report
  187. Test Runner
  188. Test Scenario
  189. Test Script
  190. Test Specification
  191. Test Strategy
  192. Test Suite
  193. Top-Down Integration
  194. Traceability Matrix
  195. User Acceptance Testing
  196. UI Testing
  197. Unit Test Framework
  198. Unit Testing
  199. Use Case
  200. Use Case Testing
  201. Usability Testing
  202. Validation
  203. Verification
  204. Visual Testing
  205. Visual Regression Testing
  206. White Box Testing
  207. Web Services Testing
  208. What is XCode
  209. Web Automation
  210. Web Application Testing
  211. Web Performance Testing
  212. Web Test Automation Tools
  213. WebDriver
  214. Web Testing
  215. XPath Query

OVERVIEW

When it comes to software testing, there is plenty of information available; it can be hard to know where to begin. If you're a novice in software testing, you've probably heard many unfamiliar acronyms and jargon. In order to expand your professional vocabulary, learning different testing terminologies is crucial.

This guide on software testing glossary covers some of the basic definitions for software testing and quality assurance commonly used by QA testers.

Let's begin!

A/B Testing

A/B testing, or split testing, creates one or more variants of a current webpage and tests them against the original to determine which performs better on agreed metrics, such as revenue per visitor (for e-commerce websites) or conversion rate.
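
As a minimal sketch, the core comparison behind an A/B test can be written in a few lines of Python. The visitor and conversion counts here are made-up illustration data; a real analysis would also apply a statistical significance test before declaring a winner.

```python
def conversion_rate(visitors, conversions):
    """Fraction of visitors who completed the desired action."""
    return conversions / visitors

# Hypothetical results from an A/B split (illustration data only)
variant_a = conversion_rate(1000, 50)   # current page
variant_b = conversion_rate(1000, 65)   # new variant

winner = "B" if variant_b > variant_a else "A"
```

In practice, A/B testing tools pick the winner only after enough traffic has accumulated for the difference between variants to be statistically meaningful.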

API

Application Programming Interface, or API, refers to the interface through which two applications communicate. The word Application in this context refers to any software with a distinct function. An API contract defines how the two applications communicate through requests and responses.

API Testing

The process of verifying and validating an API's functionality, reliability, performance, and security is known as API testing. It entails submitting requests to an API and examining the results to ensure the desired outcomes are met. It helps identify issues such as improper data formatting, invalid inputs, inadequate error handling, and unauthorized access, and can be carried out manually or with automated tools.
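
As a self-contained sketch of an automated API test, the snippet below uses only the Python standard library. The `/status` endpoint and its JSON payload are invented for illustration, with a throwaway local server standing in for the real API under test.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """Minimal stand-in for a real API: GET returns a JSON body."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "version": 1}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def api_test():
    """Send a request and check both the status code and the payload."""
    server = HTTPServer(("127.0.0.1", 0), StatusHandler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/status"
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200           # verify the response code
            data = json.loads(resp.read())
        assert data["status"] == "ok"           # validate the response body
        return data
    finally:
        server.shutdown()
```

Real API test suites typically layer the same checks (status codes, headers, payload schema) over a shared HTTP client instead of a hand-rolled server.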

Acceptance Testing

Potential end-users or customers perform acceptance testing to check whether the software meets their requirements and can be accepted for end use.

Acceptance Test Driven Development

Acceptance Test Driven Development (ATDD) is a software development methodology that helps you to reduce the risk of defects and ensure that your application meets quality standards by incorporating testing as an integral part of the development process.

Accessibility Testing

Accessibility testing ensures your mobile and web apps can be used by as many people as possible. This includes people with disabilities such as vision impairments, hearing limitations, and other physical or cognitive conditions.

Actual Result

The actual result, also known as the actual outcome, is the result a tester receives after performing the test. The actual result is documented along with the test case during the test execution phase. After completing all tests, the actual result is compared with its expected outcome, and any deviations are noted.
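
The relationship between expected and actual results is easy to see in code. In this sketch, `add_to_cart` is a made-up system under test, and any mismatch between the two values would be documented as a deviation.

```python
def add_to_cart(cart, item):
    """Hypothetical system under test: returns the cart with the item added."""
    return cart + [item]

# Expected result: defined from the requirements before the test is executed
expected = ["book", "pen"]

# Actual result: what the system actually returns during execution
actual = add_to_cart(["book"], "pen")

# Any mismatch between the two is recorded as a deviation
deviation = actual != expected
```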

Ad Hoc Testing

Ad hoc testing is a type of informal, unstructured software testing that attempts to break the application in order to detect vulnerabilities or flaws as quickly as possible. It is carried out randomly and is usually an unplanned activity that does not follow test design principles or documentation when creating test cases.

Agile Development

Agile software development is an iterative approach in which requirements and solutions evolve through collaboration between self-organizing, cross-functional teams, with work delivered in short, repeated cycles.

Agile Testing

The Agile testing practice works according to the rules and principles of the Agile software development process. Unlike the Waterfall approach, testing begins at the start of the project, with development and testing running simultaneously. In the Agile testing approach, development and testing teams work closely together to accomplish different tasks.

Alpha Testing

Alpha testing is a software testing method that detects bugs in the product before it is released to real users or the public. It is performed near the end of development, before beta testing begins.

Analytical Test Strategy

Analytical test strategies are used to identify potential problems in the test basis before the test is executed; they require an upfront analysis of the test basis.

Automated Testing

Automated testing uses scripts to perform repetitive testing tasks, improving efficiency and the reliability of results. Test automation is one of the best ways to increase effectiveness, test coverage, and execution speed in software testing.
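
As an illustrative sketch, here is a small automated suite using Python's built-in `unittest` module. The `slugify` function is a stand-in for real application code; once scripted, the suite can be rerun on every build with no manual effort.

```python
import unittest

def slugify(title):
    """Function under test (invented for illustration): title -> URL slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    """A small automated suite: written once, repeatable on every build."""
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces(self):
        self.assertEqual(slugify("  A   B "), "a-b")

# Run the suite programmatically and keep the result object
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTests)
)
```

In day-to-day use the same tests would be discovered and run by a test runner (for example `python -m unittest`) inside a CI pipeline.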

...

Back-to-Back Testing

Back-to-back testing is a type of comparison testing conducted if there are two or more variants of components with similar functionality. The aim is to compare the results and check for any divergences in work.
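
A minimal back-to-back test in Python, assuming two interchangeable implementations of the same contract. The hand-rolled insertion sort is invented for illustration; the test compares both variants on identical inputs and collects any divergences.

```python
def sort_builtin(values):
    """Variant A: rely on Python's built-in sort."""
    return sorted(values)

def sort_insertion(values):
    """Variant B: a hand-rolled insertion sort with the same contract."""
    result = []
    for v in values:
        i = 0
        while i < len(result) and result[i] <= v:
            i += 1
        result.insert(i, v)
    return result

def back_to_back(cases):
    """Run both variants on the same inputs and report any divergence."""
    divergences = []
    for case in cases:
        a, b = sort_builtin(case), sort_insertion(case)
        if a != b:
            divergences.append((case, a, b))
    return divergences
```

An empty divergence list means the two variants agree on every input exercised.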

Beta Testing

Beta testing is external user acceptance testing and is the last testing performed before a product is released to the audience. In beta testing, a nearly completed version of the software, the beta version, is released to a limited number of end-users for testing.

This beta testing method is performed to gain feedback on accessibility, usability, reliability, functionality, and other aspects of the developed software.

Big Data Testing

Big Data Testing is a specialized quality assurance process aimed at verifying and validating the quality and reliability of large volumes of data within big data systems. It ensures that data processing, storage, and retrieval functions operate correctly, enabling organizations to make informed decisions based on accurate and trustworthy data in their big data applications.

Big Bang Testing

Big Bang Testing is an integration testing strategy that links all units together at once rather than incrementally. This makes it difficult to isolate errors, because failures cannot easily be traced to an individual unit's interface.

Black Box Testing

Black box testing involves examining the software without knowledge of its internal structure; it often refers to functional or acceptance testing. The opposite approach, which tests with full knowledge of the internal structure, is known as white box or clear box testing. Black box testing can be performed by anyone independent of the development team, and a tester's familiarity with the code should not affect testing quality.

Bottom-up Integration

In bottom-up integration testing, the lowest-level modules are tested first and then progressively integrated with higher-level modules until all modules are tested. Drivers are used to stand in for the higher-level modules that have not yet been integrated.
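
A sketch of the driver idea: `line_total` plays the role of a lower-level module, and the driver stands in for a higher-level invoicing module that does not exist yet. Both names and the price data are invented for illustration.

```python
def line_total(unit_price, quantity):
    """Lower-level module under test: computes one line-item price."""
    return round(unit_price * quantity, 2)

def driver():
    """Driver: substitutes for the not-yet-built invoicing module
    and exercises line_total directly with known inputs."""
    results = []
    for price, qty, expected in [(9.99, 2, 19.98), (0.5, 3, 1.5)]:
        actual = line_total(price, qty)
        results.append(actual == expected)
    return results
```

Once the real higher-level module is integrated, the driver is discarded.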

Build Automation

Build Automation refers to the process of automating the compilation, assembly, and deployment of source code into finished products or executable programs. This practice streamlines and accelerates the software development process, minimizing manual intervention and potential errors.

BS 7925-2

BS 7925-2 is a Software Component Testing Standard. This standard describes the process for component testing using specific test-case designs and measurement systems. This will ensure improvement of the software testing quality and, in turn, improve the quality of the software products.

Bug

A bug is a problem causing a program to crash or produce invalid output. The problem is caused by insufficient or erroneous logic. A bug can be an error, mistake, defect, or fault, which may cause failure or deviation from expected results.

Canary Testing

Canary Testing is a technique in which a new change or update is first released to a small subset of users or servers in the production environment, to detect issues and minimize risk before a full rollout. It is often used in conjunction with A/B testing, where multiple versions of a feature are released to different groups. By comparing the performance and feedback of the different versions, developers can refine and improve the new feature before it is made public.

CAST

The CAST Certification exhibits a basic-level understanding of quality testing principles and practices. In addition, acquiring the designation of Certified Associate in Software Testing (CAST) demonstrates a professional level of expertise in the principles and practices of software testing in the IT profession.

Chaos Engineering

Chaos engineering is a method of testing software that introduces random failures and faults to verify its resilience in the face of unpredictable disruptions. Such disruptions cause applications to fail in ways that are difficult to anticipate and debug, so chaos engineers trigger them deliberately to learn why the system breaks.

Chaos Testing

Chaos testing involves purposefully injecting faults or failures into your infrastructure to test your system's ability to respond when those failures occur. This method is an effective way to practice disaster recovery procedures and prevent downtime and outages from happening.

Change Requests

Change requests come from stakeholders in the software development process who want to change something in a product or production method. Common change requests include defects and requests for product enhancements or new features.

CI/CD Testing

CI/CD Testing, which stands for Continuous Integration/Continuous Deployment Testing, is a vital component of modern software development. It involves the systematic and automated testing of software applications at various stages of the development pipeline.

CMMI

The CMMI or Capability Maturity Model Integration is a structured collection of best practices in engineering, service delivery, and management. It aims to assist companies in improving their ability to deliver customer satisfaction through an ever-increasing understanding of their capabilities.

The framework organizes practices by the effectiveness of the practice itself ("capability") and by the implementation of structured combinations of effective practices within the organization ("maturity").

Code Coverage

Code coverage is a widely used metric that shows how much of your source code is exercised by your tests, helping you assess the quality of your test suite. It is a form of white box testing that finds areas of a program not executed during testing.
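
The idea is easy to demonstrate with a two-branch function (invented for illustration). A suite that never supplies a negative input leaves the second branch unexecuted, which a coverage tool such as coverage.py would flag as uncovered.

```python
def classify(n):
    """Function with two branches; each needs its own test input."""
    if n >= 0:
        return "non-negative"
    return "negative"   # uncovered if no test ever supplies n < 0

# A one-test suite exercises only the first branch ...
covered_only_first = classify(5) == "non-negative"

# ... adding a negative input brings branch coverage to 100%.
covered_second = classify(-1) == "negative"
```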

Code Review

Code reviews, also known as Peer reviews, are a crucial part of any development process. They help ensure the quality of the code base, expose bugs, and provide developers with valuable experience.

Code Reusability

Code Reusability is a programming practice that involves writing code in a way that it can be used again in different parts of a software application or in various projects, without modification. This approach enhances efficiency, reduces errors, and shortens development time, as developers can leverage pre-existing, tested code.

Coding Standards

Coding Standards are a set of guidelines and best practices used by programmers to write code that is easy to read, maintain, and understand. These standards ensure that code is consistently formatted and structured across different developers and projects, facilitating smoother collaboration and more efficient code review processes.

Compatibility Testing

Compatibility testing verifies whether the software will run on different hardware, operating systems, applications, network environments, or mobile devices. It is performed once the application has become stable. Compatibility testing prevents future compatibility issues, which is essential from a production and implementation standpoint.

Component

A software component is a unit of composition with only contractually specified interfaces and explicit context dependencies.

Component Testing

Component testing validates the usability of each component of the software application. The behavior of each component is also determined along with the usability testing. Component testing requires each component to be in an independent state and controllable.

Concurrency Testing

Concurrency testing, also known as multi-user testing, is a form of software testing performed on an application with multiple users logging in simultaneously. It helps identify and measure problems associated with concurrencies, such as response time, throughput, locks/deadlocks, and other issues.
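
A toy concurrency check in Python: several threads hammer a shared booking counter, and the final count reveals whether updates were lost. The booking scenario is invented for illustration; the lock is what makes the concurrent increments safe here.

```python
import threading

counter = 0
lock = threading.Lock()

def book_ticket(times):
    """Each simulated user increments a shared booking counter."""
    global counter
    for _ in range(times):
        with lock:               # without the lock, updates could be lost
            counter += 1

# Five simulated concurrent users, 1000 bookings each
threads = [threading.Thread(target=book_ticket, args=(1000,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter should be exactly 5 * 1000 if concurrent updates are safe
```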

Configuration Management

Configuration management is an engineering process for maintaining the consistency of a product's attributes throughout its life. In the technology world, configuration management is an IT management process that tracks individual configuration items of an IT system.

Contract Testing

The contract testing technique ensures that applications work together by checking each application in isolation to ensure the messages it sends or receives conform to a shared understanding.

Content Testing

Content testing ensures that your website's target audience can find, understand, and comprehend your content. It starts early in the UX process to ensure that new content is implemented to maximize comprehension and usability.

Context Driven Testing

Context-driven testing is an approach to software testing that emphasizes the importance of considering the specific context of a project when designing and executing tests. Context-driven testers don't rely on a one-size-fits-all strategy for testing since each project is unique and requires a customized strategy.

Continuous Testing

Continuous testing provides feedback on business risks as early as possible before releasing a newly developed product. Organizations can use it to ensure that applications remain secure and effective in complex, fast-paced environments through test automation.

COTS

Commercial Off-the-Shelf (COTS) software is becoming an ever-increasing part of organizations' comprehensive IT strategy for building and delivering systems. However, a common perception held by many people is that since a vendor developed the software, much of the testing responsibility is carried by the software vendor.

Cucumber Testing

Cucumber Testing is a software testing approach that supports Behavior Driven Development (BDD). It allows developers and testers to write test cases in plain English, making the testing process easily understandable even for individuals without a technical background.

Cross Browser Testing

Cross Browser Testing allows you to verify your application's compatibility with different browsers. It's essential to any development process because it ensures that your product works for all users, regardless of their browser preferences.

CSS Testing

CSS testing is a software testing method that ensures the correctness and consistency of Cascading Style Sheets (CSS) used in web applications and websites. CSS testing can be performed manually or through automated tools, including techniques such as visual testing and regression testing to ensure consistency across different platforms and browsers.

Cypress Snapshot Testing

Cypress Snapshot Testing refers to a testing methodology used within the Cypress testing framework, a popular JavaScript-based end-to-end testing tool. Snapshot testing in Cypress captures the state of a web application's UI at a specific point in time and compares it to a "snapshot" taken earlier. This approach is particularly useful for ensuring visual consistency and detecting unintended changes in the UI over time.

Database Testing

Database testing is a type of software testing that focuses on validating the databases and ensuring data integrity, consistency, and security. It involves rigorously testing the schema, tables, triggers, stored procedures, and other database components. The primary goal is to ensure that the database systems work as expected, handle data correctly, and interact efficiently with applications.

Data Driven Testing

Data driven testing is a method of creating test scripts in which test data or output values are read from data files rather than using the same hard-coded values. To achieve greater coverage from a single test, you can run the same test case with different inputs.
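
A minimal data-driven sketch: the test data lives in a separate table (in practice often a CSV file or spreadsheet), and a single test loop consumes it. The `check_login` function and its credentials are invented for illustration.

```python
# Test data kept separate from the test logic; in real suites this
# would typically be loaded from a CSV file or spreadsheet.
login_cases = [
    # (username, password, expected_outcome)
    ("alice", "correct-horse", True),
    ("alice", "wrong-pass", False),
    ("", "anything", False),
]

def check_login(username, password):
    """Toy system under test (credentials assumed for illustration)."""
    return username == "alice" and password == "correct-horse"

def run_data_driven():
    """One test script, many data rows; returns the rows that failed."""
    failures = []
    for username, password, expected in login_cases:
        if check_login(username, password) != expected:
            failures.append((username, password))
    return failures
```

Adding a new scenario is then just adding a row of data, with no change to the test logic.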

Data Flow Testing

Data flow testing is a form of structural testing that locates test paths in a program according to the locations of definitions and uses of variables.

Debugging

Debugging is the process of finding and fixing software errors. It begins after a program fails to execute correctly and ends when the problem has been solved and the software tested successfully. Errors may need fixing at every stage of development, and debugging can be highly complex and tedious.

Decision Table

A decision table is an excellent tool for testing and requirements management. It is a structured way to break down requirements when working with complex rules. Decision tables are used to model complicated logic: they show all possible combinations of conditions to be considered and reveal which conditions have been missed.
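
A decision table translates naturally into test data. The discount rules below are a made-up example: each row is one combination of conditions with the expected action, and the loop checks the implementation against every row.

```python
# Decision table for a hypothetical discount rule:
#  (is_member, order_over_100, expected_discount)
decision_table = [
    (True,  True,  0.15),
    (True,  False, 0.10),
    (False, True,  0.05),
    (False, False, 0.00),
]

def discount(is_member, order_over_100):
    """Implementation under test for the rules in the table above."""
    if is_member and order_over_100:
        return 0.15
    if is_member:
        return 0.10
    if order_over_100:
        return 0.05
    return 0.00

def run_decision_table():
    """Check the implementation against every row of the table."""
    return [discount(m, o) == exp for m, o, exp in decision_table]
```

Because the table enumerates every combination of conditions, no rule can be silently missed.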

Defect

A defect is a discrepancy between expected and actual results found during testing; it is a deviation from the customer's requirement. A defect discovered in the product after it has been shipped to the customer is commonly reported as a failure.

Defect Management

Defect management involves identifying and reducing bugs in the earlier stages of the software development lifecycle and mitigating their impact. An effective defect management process helps in deploying bug-free software applications.

Deliverable

A deliverable is something that's delivered. In software engineering, that's usually code or documentation. Of course, plenty of work makes the deliverable possible, but it isn't itself a deliverable, such as testing modules or researching the best way to do something.

DevOps Testing

DevOps testing is a process of automating and streamlining your software's delivery lifecycle. Many companies employ DevOps testing strategies, starting with the agile practice of Continuous Integration.

Distributed Testing

Distributed testing is a methodology used in the software development process where multiple systems or components, often located in different geographical locations, work in parallel to execute tests. This approach is particularly useful for large-scale projects and applications that require testing under varied conditions and environments.

Driver

In software testing, a driver is a temporary piece of code that stands in for a higher-level module: it calls the module under test and supplies it with test inputs. Drivers are used in bottom-up integration testing, when the modules that would normally call the component being tested have not yet been developed.

Dynamic Testing

Dynamic testing is software testing that involves executing a system or application and monitoring its behavior in real time to assess its quality, performance, and functionality. Its goals include finding flaws, bugs, and errors in the application and confirming that it complies with its requirements and quality standards.

End-To-End Testing

End-to-end testing is a software testing technique that scrutinizes the functioning of a software application from start to finish. It examines the overall flow of the software, how it functions in different environments, and whether the application flow behaves as expected.

Error

An error is a difference between what the software is supposed to do or is expected to do and what the software does. A defect in the software can result in erroneous behavior.

Error Logs

Error logs are computer files documenting critical errors occurring when an application, operating system, or server is operating. Error logs contain entries on various topics, such as table corruption and configuration corruption. They can be handy for troubleshooting and managing systems, servers, and even networks.

Emulator

The purpose of an emulator is to enable one computer system to behave like another computer system. Emulators typically allow the host system to run software or use peripheral devices designed for the guest system.

Execute

Test execution is simply performing (executing) tests to verify specific functionality. Execution can be manual, where a person follows all the steps documented in the test cases, or automated, where an automation testing tool is instructed to run the steps.

Exhaustive Testing

Exhaustive testing is a thorough process that considers all possible combinations of inputs, usage scenarios, and random situations to ensure that the product cannot be broken or crashed. Because the number of combinations grows so quickly, fully exhaustive testing is rarely practical for real systems.
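
Exhaustive testing is only feasible when the input space is tiny, as in this made-up example where all 2 × 2 × 3 = 12 combinations of inputs can be enumerated with `itertools.product`.

```python
import itertools

def shipping_cost(express, member, zone):
    """Function under test (invented) with a small, fully enumerable input space."""
    cost = {1: 5, 2: 8, 3: 12}[zone]
    if express:
        cost += 10
    if member:
        cost -= 2
    return cost

# Exhaustive testing: every combination of the three inputs (2 * 2 * 3 = 12 cases)
all_cases = list(itertools.product([True, False], [True, False], [1, 2, 3]))
all_positive = all(shipping_cost(e, m, z) > 0 for e, m, z in all_cases)
```

For anything larger than a toy like this, testers fall back on techniques such as equivalence partitioning and boundary analysis to sample the input space instead.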

Expected Result

The expected result, also known as the expected outcome, is the result a test is predicted to produce, defined from the requirements before the test is executed. During test execution it is compared with the actual result, and any deviation is investigated as a potential defect.

Exploratory Testing

Exploratory testing combines the tester's experience with a structured testing approach and is often used in testing phases under intense time pressure. It involves concurrent test case design and execution against the application under test.

Extreme Programming

Extreme Programming, commonly known as XP, is an agile method focused on software development. While Scrum focuses on prioritizing work and getting feedback at the project management level, XP focuses on software development best practices. XP values and practices can also be applied to other knowledge work projects.

FAT

Factory Acceptance Testing (FAT) is used to verify if the newly manufactured and packaged equipment meets its intended purpose. In addition, the FAT verifies the system's functioning and ensures the customer's requirements have been met.

Front-end Testing

Front-end testing is a type of testing that involves checking the user interface (UI) and how it interacts with the other layers in an application. It's also called "browser testing," "front-end validation," and "functional testing."

Functional Integration

Functional Integration is an integration testing approach in which modules are combined and tested according to the functionality they provide, verifying that related functions work together as intended.

Functional Testing

Functional testing verifies that each function of the software application operates according to its requirements. It primarily involves black box testing and is not concerned with the application's source code.

Future Proof Testing

A software application must be planned and created to be compatible with changes in technology, operating systems, and hardware platforms to pass future-proof testing. It involves thinking ahead to potential changes in the future and designing the application to be easily adaptable to these changes without requiring extensive redesign or redevelopment.

Game Testing

Game Testing is a critical process in game development where professionals meticulously evaluate a game's performance, functionality, and overall quality. This process involves identifying and documenting bugs, glitches, and other issues that could hinder the gaming experience.

Glass Box Testing

Glass box testing is a program testing technique that examines the program's structure and derives test data from the program's logic. The other names for glass box testing are clear box testing, open box testing, logic-driven testing, or path-driven testing.

Grey Box Testing

In grey box testing, a tester is given partial knowledge of the internal structure of an application. The purpose of grey box testing is to search for and identify defects caused by improper code structure or improper use of applications.

Headless Browser Testing

Headless browser testing refers to running browser tests without the graphical user interface (GUI) typically associated with web browsers. This type of testing is particularly useful for automating web application testing, as it allows for faster execution of tests and is ideal for environments where a display screen, mouse, or keyboard is unnecessary.

History

A test's history is a brief record of all changes made to the test; it helps users identify the root cause of an error when one occurs.

IEEE 829

IEEE 829 is a Software Test Documentation standard that specifies the format of the documents to be used in the different stages of the software testing life cycle.

Incident Report

An incident report is a detailed description of the incident observed and contains data like Summary, Steps Used, Priority, Severity, No. of Test Cases Impacted, Status, Assigned To, etc. An incident report is essential as it helps keep track of the incidents and provides information to concerned people.

Incremental Testing

Incremental testing is a method of integration testing performed after unit testing to test a program's modules. It uses several stubs and drivers to isolate the modules one by one and reveal any errors or defects in each module.

Inspection

Inspection refers to the peer review of any work product by trained individuals who look for defects using a well-defined process. An inspection may also be called a Fagan inspection, after Michael Fagan, the creator of a formal software inspection process.

...

Integration Testing

Integration testing occurs after unit testing and checks for defects in the interactions between integrated components or units, with the focus on exposing defects that appear when those components interact.

Iteration

Iterative testing is making small, gradual changes or updates to a product based on insights like test results and user feedback from earlier changes and testing them against predefined baseline metrics.

Interface Testing

Interface testing is a form of software testing that verifies the correct communication between two applications. The term interface refers to the connection that integrates two components. APIs, Web services, and many other interfaces are found in the computer world. Testing these interfaces is known as Interface Testing.

Jest Testing

Jest is a JavaScript testing framework created by Meta. It is most commonly used for writing unit tests, which exercise individual functions in isolation.

JUnit Testing

JUnit is a Java testing framework that allows developers to write and execute automated tests. In Java, test cases must be re-executed every time new code is added to ensure nothing in the code is broken.

Keyword Driven Testing

Keyword driven testing is a type of functional testing that separates test case design from test development. A keyword describes a user's action on a test object; test steps are composed from reusable keywords, making test cases easier to understand, automate, and maintain.
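
A bare-bones keyword-driven sketch: the test steps are data, and a small interpreter maps each keyword to an action. The keywords, actions, and URL here are all invented for illustration; real frameworks drive a browser or app instead of a dictionary.

```python
# Each keyword names a user action; the framework maps it to code.
def open_page(state, url):
    state["page"] = url

def type_text(state, field, text):
    state.setdefault("fields", {})[field] = text

def click(state, button):
    state["clicked"] = button

KEYWORDS = {"open": open_page, "type": type_text, "click": click}

def run_keywords(steps):
    """Interpret a table of (keyword, *args) test steps."""
    state = {}
    for keyword, *args in steps:
        KEYWORDS[keyword](state, *args)
    return state

# A test case written purely as data
steps = [
    ("open", "https://example.test/login"),   # hypothetical URL
    ("type", "username", "alice"),
    ("click", "submit"),
]
final_state = run_keywords(steps)
```

Non-programmers can then author new test cases by editing the step table alone.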

Key Performance Indicators (KPI)

A Key Performance Indicator (KPI) is a measurable value demonstrating how effectively a company, organization, team, or individual achieves key business objectives. KPIs are used to evaluate success at reaching targets and are integral in business and project management.

Load Testing

Load Testing is a way to determine how well a system, software product, or application can handle multiple users using it at one time. It determines the behavior of the application under real-life conditions.

Localization Testing

Localization testing is software testing that ensures a product is culturally responsive to the needs of the people in a specific region. Localization testing ensures that the application can be used in that particular region.

Maintainability

Maintainability refers to the ability to update or modify the system under test. This is an important parameter as the system is subjected to changes throughout the software life cycle.

Maintenance Testing

Maintenance testing is performed on an already-deployed software system after modifications such as enhancements, fixes, or environment changes. It confirms that the changes work as intended and that existing functionality has not regressed.

Manual Testing

Manual testing is the process of executing test cases by hand, without automation tools, to verify whether the application's functionalities work as expected.

Mental models are frameworks or representations that people construct to understand and interpret the world around them. These models are deeply personal and subjective, shaped by individual experiences, education, cultural background, and other influences. They help simplify complex realities, predict outcomes, and guide decision-making processes.

Microservices testing combines QA activities to ensure that each microservice works appropriately. It verifies that the failure of a single service doesn't cause severe functional disruption of the entire software and that all microservices function smoothly as one application.

Mobile app testing is the process of validating a mobile application before it is released publicly. Testing mobile apps helps ensure the app meets technical and business requirements.

Mobile Device Testing

Mobile Device Testing is the process by which a device is tested for its qualities to ensure that it meets the requirements for which it was developed.

MTBF

Mean time between failures (MTBF) is the average time between failures of a piece of repairable equipment. It can be used to estimate when equipment may fail unexpectedly in the future or when it needs to be replaced.

Mutation Testing

Mutation testing is a software testing technique used to evaluate the quality of existing software tests. It involves modifying a program in small ways, creating mutant versions of a program, and assessing the original version's ability to detect these mutants.

Negative testing is a software testing approach that checks how an application behaves when given invalid, unexpected, or malformed input. Invalid data is deliberately supplied, and the actual behavior is compared against the expected error handling to ensure the application fails gracefully rather than crashing.
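A small sketch of the pattern, using an illustrative `parse_age` function: the negative cases assert that invalid input is rejected in a controlled way.

```python
# Negative-test sketch: feed invalid input and assert that the code
# fails in the expected, controlled way. parse_age is illustrative.

def parse_age(text):
    value = int(text)                 # raises ValueError for non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Positive case:
print(parse_age("42"))                # 42

# Negative cases: invalid input must raise ValueError, not misbehave.
for bad in ["-5", "abc", "999"]:
    try:
        parse_age(bad)
        print(bad, "unexpectedly accepted")
    except ValueError:
        print(bad, "rejected as expected")
```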

Non-functional testing is an umbrella term for the various techniques used to evaluate and assess the non-functional attributes of a software application. Its primary purpose is to evaluate an application's competency and effectiveness. Non-functional testing is required to check the system's non-functional requirements, such as usability, performance, and reliability.

NUnit is a popular open-source unit testing framework for C#. Ported from the JUnit framework, it aids in writing tests in .NET languages. Batch execution of tests can be performed through the NUnit-console.exe console runner, which loads and explores tests with the help of the NUnit Test Engine.

Operational testing confirms that a product, system, service, and process meets operational requirements. Operational requirements include performance, security, stability, maintainability, accessibility, interoperability, backup, and recovery. It is a type of non-functional acceptance testing.

OTT testing is the testing of a content provider's video, data, and voice capabilities delivered over the Internet. It is crucial for ensuring customer experience, network speed, security, and connectivity, since many application components, networks, and infrastructure setups are linked to provide an effective OTT service.

Pair Testing

Pair testing is a collaborative effort in which two people test together, rather than a single-person testing effort. Typically, one of the team members is a tester, and the other is either a developer or a business analyst.

The Page Object Model is a design pattern that creates an object repository for storing all web elements. It helps reduce code duplication and improves test case maintenance.
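A minimal sketch of the pattern, with a `FakeDriver` standing in for a real WebDriver so the example is self-contained: the page class owns every locator, so tests never touch raw selectors.

```python
# Page Object Model sketch: the page class owns all element locators and
# interactions, so tests never reference raw selectors. FakeDriver is a
# purely illustrative stand-in for a real WebDriver.

class FakeDriver:
    def __init__(self):
        self.fields = {}
    def type(self, locator, text):
        self.fields[locator] = text
    def read(self, locator):
        return self.fields.get(locator, "")

class LoginPage:
    USERNAME = "#username"      # locators live in exactly one place
    PASSWORD = "#password"

    def __init__(self, driver):
        self.driver = driver
    def enter_credentials(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)

driver = FakeDriver()
page = LoginPage(driver)
page.enter_credentials("alice", "s3cret")
print(driver.read("#username"))  # alice
```

If the username field's selector changes, only `LoginPage.USERNAME` needs updating; every test using the page object keeps working.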

Peer Testing

In software development, peer testing is a way of evaluating the work a co-worker performs, with developers of comparable skill reviewing each other's code. The peer review technique is adopted in many other professions too: it is a team effort, which is productive in pursuing a common goal.

Performance Indicator

A performance indicator, also known as a Key Performance Indicator (KPI), is a type of performance metric used by testers to evaluate the effectiveness and performance of testing.

Performance testing analyzes the quality and capability of a product. It is used to determine how well a system performs under varying workloads and how it will handle future demands on its functionality.

Postcondition

A postcondition is a requirement that must be true immediately after the execution of some section of code. Postconditions are sometimes checked using assertions within the code itself.
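For example, after sorting, the result must have the same length as the input and every adjacent pair must be ordered; the assertions below check exactly those postconditions.

```python
# Postcondition sketch: asserts after the operation check properties
# that must hold once the code has run.

def sort_copy(items):
    result = sorted(items)
    # Postconditions: same length, and every adjacent pair is ordered.
    assert len(result) == len(items)
    assert all(result[i] <= result[i + 1] for i in range(len(result) - 1))
    return result

print(sort_copy([3, 1, 2]))  # [1, 2, 3]
```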

Priority

Priority is the order or importance of an issue or test case based on user requirements, while severity is the impact that the issue, or the failure of the test case, will have on the system. Typically, the business analyst or client decides priority, and the tester decides severity, having seen the impact on the system. This convention may or may not be followed everywhere.

Visual regression testing in Python is a quality assurance process used to ensure that the visual aspects of a web application or UI component remain consistent across different versions or after changes have been made. This type of testing is crucial for identifying unintended visual modifications that might not affect the functionality but could degrade the user experience or the UI's intended design.

QA Metrics

QA metrics are measurements that software teams use to improve the quality of their products by improving testing. These quality assurance metrics can help predict or reveal flaws in a product before it reaches consumers.

Quality

Quality refers to the conformance to implicit or explicit requirements, expectations, and standards. To fulfill these requirements, a quality control mechanism is set up. Quality Control (QC) is how you achieve or improve product quality.

QA testing, or quality assurance, is the process of ensuring that the product or service provided to customers is of the best possible quality. QA focuses on improving processes for delivering quality products.

Quality management ensures that the products or services a company creates meet a certain level of quality.

Retesting refers to the process of repeating certain tests on a software application after it has undergone changes or modifications. The purpose of retesting is to confirm that previously identified defects have been properly fixed by the changes made to the software.

Regression testing re-runs existing tests after changes to a product or software to ensure that the older functions and programs still work alongside the newly added changes. Regression testing is an integral part of the program development process and is done by code testing specialists.

Release testing tests a new software version (project or product) to verify that the software can be released. Release testing has a broad focus since the full functionality of the release is under test. Therefore, the tests included in release testing are strongly dependent on the software itself.

Reliability testing is a technique for testing software's ability to function consistently under given environmental conditions; it is used to find issues in the software's design and functionality.

Responsive design is a UI development approach that generates dynamic changes to the website's appearance based on the screen size and device orientation. It ensures that the content and screen size are compatible with each other.

Reviewer

Reviewers are the domain experts who methodically assess the code to identify bugs, improve code quality, and help developers learn the source code. Two or more experts should review the code if the code covers more than one domain.

Requirements analysis is a crucial phase in project management and software development, involving the systematic examination and documentation of project or system needs. This process helps define clear objectives, scope, and specifications, ensuring that all stakeholders have a shared understanding of what needs to be achieved.

Requirements management tools manage requirements, communicate those changes to stakeholders, and control new or modified requirements.

RUP

RUP (Rational Unified Process) is a software development process developed by Rational, a division of IBM. It divides the development process into four phases (inception, elaboration, construction, and transition) and organizes work into disciplines such as business modeling, analysis and design, implementation, testing, and deployment.

Sanity testing is a narrow part of regression testing, performed to ensure that code changes work properly. If sanity testing uncovers problems with the code, the build is rejected.

Scalability testing validates that a software application can be scaled up or scaled out in terms of its non-functional capabilities. Software quality analysts often group performance, scalability, and reliability testing together.

Scenario

A scenario is one usage example. A piece of software can usually be used for more than one particular thing, and each specific use can be described with a concrete example; these examples are often referred to as scenarios.

Screenshot testing

Screenshot testing is a method of automated testing that checks the visual appearance, layout, and other details of a web page or application. By comparing images of the page with a baseline image, screenshot testing can detect visual regressions (unintended changes introduced while developing or deploying an application) and other problems.
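At its core the comparison is a pixel diff against the baseline. The sketch below uses 2D lists as stand-in "images" (real tools compare PNG bitmaps) and fails the check when any pixel differs.

```python
# Screenshot-comparison sketch: images represented as 2D pixel grids
# (real tools diff PNG bitmaps); count pixels that differ from the
# baseline and flag a regression if the count exceeds a threshold.

def diff_pixels(baseline, current):
    return sum(
        1
        for row_a, row_b in zip(baseline, current)
        for a, b in zip(row_a, row_b)
        if a != b
    )

baseline = [[0, 0, 0],
            [0, 1, 0]]
current  = [[0, 0, 0],
            [0, 1, 1]]   # one pixel changed

changed = diff_pixels(baseline, current)
print(changed)                                       # 1
print("pass" if changed == 0 else "visual regression")
```

Practical tools add tolerances and ignore regions to avoid flagging benign differences such as anti-aliasing.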

Security Testing seeks to uncover all possible loopholes and vulnerabilities in the software system that might result in a loss of information, revenue, or reputation at the hands of employees or outside parties.

Selenium Grid is a part of the Selenium Suite that specializes in running multiple tests across browsers, operating systems, and machines in parallel. It is used to speed up the execution of a test suite by using multiple machines to run tests in parallel.

Selenium IDE is an extension of your testing environment, providing additional tools for logging in, searching for items, and interacting with the user interface.

Selenium with Python refers to using the Selenium WebDriver for automating web browser interaction using Python as the scripting language. Selenium is a popular tool for automating web browsers, allowing developers and testers to simulate user interactions with web pages. With its simplicity and readability, Python is a popular choice for writing Selenium scripts.

Service Virtualization is a testing practice that simulates the behavior of specific components in heterogeneous component-based applications. This technique enables developers and testers to work in a stable and isolated environment, replicating the behaviors of unavailable or costly-to-involve systems.

Severity

Severity is defined as the measurement of a defect's impact on the system of the application/unit being tested. A higher impact on the system functionality will lead to assigning higher severity to the defect. The Quality Assurance Engineer usually determines the severity of the level of defect.

The shift-left test strategy involves moving testing to the beginning of the software development process. By testing your project early and often, you can reduce the number of errors and enhance the quality of your code. The idea is to avoid discovering critical issues at the deployment phase, when patching the code is most costly.

Shift-right testing is a software testing approach emphasizing testing in later stages of the development lifecycle, including post-release. It complements the traditional shift-left approach, which focuses on testing early in development. Shift-right testing involves monitoring and testing the software in production environments to gather feedback from real-world use.

Smoke testing helps you determine whether the most critical functions of the software applications are working as intended. It identifies mission-critical issues at the earliest, so you can fix them before delving into finer details.

A software risk analysis examines code violations that could threaten the software's stability, security, or performance.

Software Development Life Cycle (SDLC)

The Software Development Life Cycle is a methodology for developing software that involves planning, implementing, testing, and releasing the product. The SDLC ensures that your application meets quality standards, is delivered on time and within budget, and meets changing user needs throughout its lifecycle.

Software quality can be defined as the software's ability to meet the user's requirements outlined in the SRS (Software Requirement Specifications) documentation. A high-quality software application meets the end-user specifications; it is maintainable, developed on time, and within budget.

A software quality management approach aims to establish and manage software quality to ensure that a software application fulfills every expected quality standard set out by the end-user. It also considers the necessary regulatory and development needs and requirements.

Software testing evaluates and verifies that a software product or application works as expected, performs as intended, and contains no errors.

The Software Testing Life Cycle is a methodology that describes the different steps and tasks involved in testing software applications. Planning, requirements analysis, test design, execution, and reporting are all systematically covered by the STLC. By doing so, it facilitates the identification and mitigation of risks, enhances teamwork, and guarantees that the software application achieves its goals.

State Transition Testing

State Transition testing is a black-box testing method implemented to observe the system's behavior for different input conditions passed in series. Both positive and negative input values are given in this testing, and the system's behavior is observed.
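The classic turnstile example makes this concrete: the valid transitions form a table, and a test feeds a series of inputs, including a negative input ("push" while locked) that should leave the state unchanged, then checks the final state.

```python
# State transition sketch: a turnstile modeled as a transition table.
# The test feeds a series of inputs and observes the resulting state.

TRANSITIONS = {
    ("locked", "coin"):   "unlocked",
    ("unlocked", "push"): "locked",
}

def step(state, event):
    # Unknown (state, event) pairs are ignored: the state does not change.
    return TRANSITIONS.get((state, event), state)

state = "locked"
# "push" while locked is a negative input with no effect:
for event in ["push", "coin", "push"]:
    state = step(state, event)
print(state)  # locked
```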

Static Testing is a software testing technique performed early in the development cycle without executing the code. It involves reviewing and analyzing software artifacts such as requirements, design documents, source code, and other documentation to identify defects and improve the quality of the software product. It can be performed manually, through code reviews, walkthroughs, and inspections, or using automated tools that analyze source code and identify potential issues.

Stress testing determines the stability or robustness of a given system, critical infrastructure, or entity.

Structural testing is testing that targets the structure of the software's code. Also known as white-box or glass-box testing, it is performed mainly by developers and aims to determine how the system works internally rather than what its functionality is. For example, if an error message pops up, there is an underlying issue; structural testing helps to find and fix it.

System

A system is a set of components formed for a common purpose. The word sometimes describes the organization or plan itself and sometimes describes the parts in the system (as in "computer system").

...

System testing involves validating how the different components of a software application interact in a fully integrated system. It is performed on the entire system in accordance with either functional or design requirements. With system testing, you can find flaws and gaps in the overall functionality of a software application.

System integration testing is a comprehensive technique to test software applications, including the entire system. It ensures that the functional and hardware aspects of the software are in synchronization.

Test Analysis refers to the systematic examination and evaluation of test data or test results to identify patterns, trends, and insights. This process is crucial for understanding the effectiveness and quality of testing procedures, identifying areas for improvement, and ensuring that testing objectives are met efficiently.

A test approach is a testing strategy that defines how testing will be carried out and the specific tasks that need to be done to carry out a particular project.

Test automation tools are used to develop and execute a variety of tests and compare the actual results against the expected results. It can be used for manual processes or as part of a continuous integration system.

A test case is a fully documented specification of the inputs, execution conditions, testing procedure, and expected results for one possible outcome of a particular test. Test cases ensure that all areas of the program have been evaluated and that no errors were missed during testing.

Test classes are code snippets that ensure the Apex class they test is functioning correctly.

Test Comparison

Test comparison involves comparing the data from the current test run against the data of previously run tests.

Test coverage is a metric software testers use to gauge how much of a program's code has been exercised by tests. To determine this, the tester records which sections of the program are executed when a test case is run and uses this information to establish whether each branch of a conditional statement has been taken.

A test infrastructure consists of software and hardware dependencies that ensure software applications run smoothly. It also helps to reduce the failure risk associated with software applications. The test infrastructure includes testing activities and methods that ensure the fastest test execution, resulting in a shorter release cycle and quicker time to market.

Test-driven development (TDD) is a transformational approach that combines test-first development with refactoring: you write a test first, then write just enough production code to make that test pass, and finally refactor with the tests as a safety net.
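The red-green-refactor cycle can be sketched in a few lines; `slugify` here is an illustrative example, not a real library function.

```python
# TDD sketch: the test is written first and drives the implementation.
# Step 1 (red): this test fails until slugify exists and behaves.
# Step 2 (green): write just enough code to make it pass.
# Step 3 (refactor): improve the code with the test as a safety net.

def test_slugify():
    assert slugify("Hello World") == "hello-world"

def slugify(text):                      # minimal code to satisfy the test
    return text.lower().replace(" ", "-")

test_slugify()
print("test passed")
```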

Test data is the input data supplied to the system or software application during testing. Test data can be varied to verify that the application handles error conditions correctly; therefore, QA should always provide different test data to test the application thoroughly.

A test design specification is a detailed plan that defines the testing approach and the features to be tested. It also includes requirements, test cases, and procedures necessary to accomplish the testing and specifies what constitutes success or failure.

Test design tools can assist in creating test cases or at least test inputs. If an automated oracle is available, the tool can also make the expected result and thus generate test cases.

A Test Environment is a setup for the testing teams to execute test cases. In other words, it supports test execution with hardware, software and network configured. The testbed or test environment is configured as per the need of the Application Under Test.

Test Estimation, in the context of project management and software testing, is the process of predicting the time, effort, and resources required to complete testing activities. It involves careful analysis and assessment to provide accurate forecasts for planning and budgeting, ensuring that testing is conducted efficiently and effectively within the project timeline.

Test Environment Management (TEM) is a practice that involves creating, maintaining, and controlling test environments in the software testing process. A test environment is a setup where software and hardware configurations are prepared to mirror the production environment as closely as possible, providing a space for accurate and realistic testing.

Test execution involves running the test cases of software applications to ensure they satisfy the pre-defined user requirements and specifications. It is an essential facet of the Software Testing Life Cycle (STLC) and Software Development Life Cycle (SDLC). Test execution commences with the completion of the test planning phase.

Test Execution Automation

Test execution can be performed using an automation testing tool directly, or it can be achieved through a management tool that invokes the automation tool. Once the process finishes, the test report provides a consolidated summary of testing performed for the project.

Test Execution Schedule

A test execution schedule allows you to run steps sequentially at a scheduled time or when triggered by a build completion.

Test execution techniques include the planning, strategies, and tactics used to improve test execution. These techniques shape how execution is organized rather than the mechanics of actually running the tests.

A test execution tool validates the completed software application against a specific test case scenario, comparing the test results to the expected outcome and postconditions. Because these tools record (capture) manual tests for later replay, they are also referred to as capture/playback, capture/replay, or record/playback tools.

Test Harness

Test Harness is a collection of supporting tools, such as stubs and drivers, used during software testing. It executes tests by using a test library and generates test reports.
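The stub-and-driver relationship can be shown in miniature; the `checkout` unit, the payment stub, and the driver below are all illustrative assumptions.

```python
# Test harness sketch: a driver invokes the unit under test, and a stub
# stands in for a dependency that is unavailable during testing.
# All names here are illustrative.

def payment_service_stub(amount):
    # Stub: simulates the real payment backend's response.
    return {"status": "ok", "charged": amount}

def checkout(cart_total, pay):          # unit under test
    receipt = pay(cart_total)
    return receipt["status"] == "ok"

def driver():
    # Driver: sets up inputs, invokes the unit, and reports the result.
    result = checkout(25.0, payment_service_stub)
    print("PASS" if result else "FAIL")

driver()  # PASS
```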

A test tool is a product that helps in one or more test activities, such as test planning, requirement gathering, building, and running tests, defect tracking, and test analysis. You can identify the input fields, including the range of valid values, using test management or a Computer Aided Software Engineering (CASE) tool.

A Test Log is one of the crucial test artifacts prepared during testing. It provides a detailed summary of the overall test run and indicates the passed and failed tests. Additionally, the test log contains details and information about various test operations, including the source of issues and the reasons for failed operations. The focus of this report/document is to enable post-execution diagnosis of failures and defects in the software.

Test management includes managing and monitoring the testing processes, documents, and other facets of a software application. It ensures that the software applications are of high quality and have undergone extensive testing.

Test Monitoring and Test Control are two critical aspects of the software testing process, playing a vital role in ensuring the quality and effectiveness of the testing efforts. They are part of the test management process, essential for coordinating and controlling testing activities during the software development lifecycle.

Test Observability

Test observability refers to the ability to observe and analyze the behavior and performance of a system or application during testing to detect and diagnose issues and failures. It involves collecting and analyzing data from various sources, such as logs, metrics, and traces, to gain insights into the system's behavior and identify areas for improvement.

Testing Methodologies encompass various approaches and strategies used to test and ensure the functionality, reliability, and stability of software applications. These methodologies guide testers through the process of planning, executing, and evaluating tests to identify bugs and issues before a product reaches the end-users.

A Test plan is a document that describes the testing objectives and activities. The test lead prepares it, and the target audience is the project manager, the project team, and, depending on the organization, the business. The test plan clearly states the testing approach, pass/fail criteria, testing stages, the automation plan (if applicable), suspension and resumption criteria, training needs, and so on. It also includes the testing risk and contingency plan.

The test process is a systematic set of activities and tasks carried out to make sure that a software application meets its specified requirements and quality standards. It entails planning tests, designing them, executing them, and reporting on the results.

A test process improvement checklist is a structured tool used in quality assurance and software testing to assess, track, and enhance testing procedures systematically. It helps teams identify areas for improvement, streamline processes, and ensure that best practices are consistently followed to achieve higher testing efficiency and reliability.

Test Process Improvement

Test Process Improvement Assessments give an independent and objective assessment of how well an organization performs its testing activities compared to the industry standard.

Test Policy

Test policy is a document that describes the ways in which an organization plans to test its products. It is determined by senior management at the organization and defines the principles that govern its testing activities.

A Test pyramid is a type of software development framework that can help developers reduce the time it takes to detect if changes affect existing code. It provides a framework for evaluating the types of tests executed in an automated test suite.

A Test report is a summary of testing objectives, activities, and results. It is prepared to help stakeholders understand product quality and decide whether a product, feature, or defect resolution is on track for release.

Test Runner is a tool that automates the execution of test cases and collects their results. It is frequently used in software testing to ensure that applications operate correctly and are bug-free. Test runners can be graphical user interfaces (GUIs) or command line tools.

A Test Scenario is a high-level description of what needs to be tested in the software application to ensure its correct functionality and performance. It outlines a specific flow of actions or events, providing a clear picture of how a feature or function should work under various conditions.

A Test Script is a set of instructions or lines of code written to verify if a software application functions as expected. It is a crucial component in automated testing, where it helps in executing a series of tasks and comparing the expected with the actual outcome.

Test Specification

Test specifications are iterative, generative drafts of test design. They allow test developers to develop new versions of a test for different populations at the item level. In addition, the specs serve as guidelines so that new versions can be compared with previous versions.

Test strategy outlines the approach used to test a particular software application. A good test strategy will define the exact process the testing team will follow in order to achieve the organizational objectives from a testing perspective.

A Test suite is a sequence of tests that ensure all the features of your application are functioning as expected. An automated test suite runs all the tests automatically and gives you a pass/fail result for each test. Some test suites take hours and sometimes days to complete.

However, automated test suites are valuable because they can be run repeatedly without a human being manually clicking and typing through the application. In addition, automated tests stop false results from cropping up due to human error.

Top-down integration testing involves testing the high-level or parent modules first, then testing the lower-level or child modules, and then integrating them. Stubs, small segments of code, are used to simulate the data responses of lower modules until those modules are thoroughly tested and integrated.

In software development, a traceability matrix is a table-type document used to track requirements. In addition to forward tracing (from requirements to design or coding), it can also be used for backward tracing (from coding to requirements). Alternatively, it is called a Requirement Traceability Matrix (RTM) or Cross Reference Matrix (CRM).

User acceptance testing, also known as end-user testing, is a valuable opportunity for the customer to test the software in its intended environment before it is released, ensuring that the final product meets customer expectations.

UI testing, also known as user interface testing, validates the UI of the web application to ensure it works smoothly or if there is any glitch that compromises user behavior and does not meet the defined requirements.

Unit Test Framework

A unit test framework provides software tools for writing and executing unit tests, including methods for building tests on a common foundation and for executing them and reporting the results.

Unit testing involves testing individual units or components of the software. Each software unit is validated to ensure that it performs as intended. Every software program has testable units. It typically has one or a few inputs and one output.
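A minimal example using Python's built-in `unittest` module, the same idea that Jest and JUnit implement for JavaScript and Java: one unit (`is_even`) is validated in isolation against a few inputs and expected outputs.

```python
# Unit test sketch with Python's built-in unittest: one small unit
# (is_even) is tested in isolation.
import unittest

def is_even(n):
    return n % 2 == 0

class IsEvenTest(unittest.TestCase):
    def test_even(self):
        self.assertTrue(is_even(4))
    def test_odd(self):
        self.assertFalse(is_even(7))

suite = unittest.TestLoader().loadTestsFromTestCase(IsEvenTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```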

Use Case

A Use case describes how an actor or user uses the system. Use cases are widely used in developing systems and in acceptance-level tests.

Use case testing helps you identify all the possible ways users will interact with your software from start to finish. Use cases are often used to test a program's error-handling capabilities and its overall robustness.

Organizations employ usability testing to gain firsthand insight into how people interact with a software application. It is a qualitative research approach that helps in the identification of usability issues and the evaluation of whether the software is user-friendly.

Validation Testing

Validation testing evaluates the product of a particular development stage to ensure that the final product meets customer expectations and requirements. Unlike verification, which reviews work products without running code, validation typically involves executing the software to confirm that it behaves as expected.

Verification

Verification refers to activities that ensure that software correctly implements a specific function. Verification is done against the design. It verifies that the developed software implements all the functionality specified in the design document.

Visual testing is a quality assurance process that systematically compares the appearance of web pages or applications across different environments and screens to ensure consistent and accurate visual presentation.

Visual regression testing is a kind of software testing in which the appearance and usability of the user interface are checked after a code change. Visual regression testing ensures that new code does not unintentionally alter the existing look and behavior of the UI.

White box testing validates a software solution's internal coding and infrastructure. It focuses primarily on strengthening security, the flow of inputs and outputs through the application, and improving design and usability. White box testing is also known as Clear Box testing, Open Box testing, Structural testing, Transparent Box testing, Code-Based testing, and Glass Box testing.

Web Services Testing involves validating web services and their interactions to ensure their functionality, reliability, performance, and security. This testing approach focuses on web services—self-contained, modular applications that can be described, published, and invoked over a network, typically the Internet.

Xcode is Apple's integrated development environment (IDE) used for creating software for macOS, iOS, watchOS, and tvOS. It offers developers a comprehensive suite of tools designed to streamline the process of developing, testing, and debugging applications for Apple devices.

Web application testing is a comprehensive process that involves evaluating web applications for functionality, usability, security, compatibility, performance, and other critical factors. This type of testing ensures that web applications meet their specified requirements and can handle anticipated traffic while providing a seamless and secure user experience.

Web Performance Testing

Web performance testing is the process of evaluating a web application's speed, responsiveness, and stability under various load conditions. Web performance testing is crucial for identifying potential bottlenecks and issues with your app so you can fix them before your customers do.

Web Test Automation Tools

Web test automation tools are essential to creating a solid product and enabling continuous integration, agile development, and DevOps to keep up with constantly changing demand.

WebDriver is an open-source browser automation framework that allows developers and testers to create automated tests that interact with web pages and validate the usability and behavior of web applications. By supporting a variety of web browsers, including Chrome, Firefox, Internet Explorer, and Safari, WebDriver makes testing possible across operating systems.

Web testing is necessary for any web developer before making a web application or website live. Web testing is designed to check all aspects of the web application’s functionality, including looking for bugs in usability, compatibility, security, and general performance.

XPath Query

XPath is a query language for scanning and manipulating data from XML documents, allowing tools to retrieve data from XML documents for content scanning.
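Python's standard-library `xml.etree.ElementTree` supports a limited XPath subset, enough to illustrate how queries select nodes; the XML document below is made up for the example.

```python
# XPath sketch using xml.etree.ElementTree, which supports a limited
# subset of XPath for querying XML documents. The document is made up.
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<library>"
    "<book lang='en'><title>Dune</title></book>"
    "<book lang='fr'><title>Vendredi</title></book>"
    "</library>"
)

# Select every <title> anywhere in the document:
titles = [t.text for t in doc.findall(".//title")]
print(titles)                                       # ['Dune', 'Vendredi']

# Select a book filtered by attribute value:
print(doc.find(".//book[@lang='fr']/title").text)   # Vendredi
```

Full XPath 1.0 (axes, functions, arbitrary predicates) requires a library such as lxml.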
