Acceptance Testing
Formal testing conducted to determine whether a system satisfies its acceptance criteria, enabling the end user to decide whether to accept the system.
Affinity Diagram
A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.
Alpha Testing
Testing of a software product or system conducted at the developer’s site by the end user.
Accessibility Testing
Verifying that a product is accessible to people with disabilities.
Ad Hoc Testing
A testing phase in which the tester tries to ‘break’ the system by randomly trying its functionality; can include negative testing as well.
Agile Testing
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm.
Application Binary Interface (ABI)
A specification defining requirements for portability of applications in binary forms across different system platforms and environments.
Application Programming Interface (API)
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ)
The use of software tools, such as automated testing tools, to improve software quality.
Audit
An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the “eyes and ears” of management.
Automated Testing
That part of software testing that is assisted by software tools and does not require operator input, analysis, or evaluation.
Beta Testing
Testing conducted at one or more end user sites by the end user of a delivered software product or system.
Basis Path Testing
A white box test case design technique that uses the algorithmic flow of the program to design tests.
Black Box Testing
Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing. Black Box testing indicates whether or not a program meets required specifications by spotting faults of omission — places where the specification is not fulfilled.
Binary Portability Testing
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Bottom-Up Testing
An integration testing technique that tests the low-level components first, using test drivers in place of the not-yet-developed higher-level components to call the low-level components under test.
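A minimal sketch of the idea, using a hypothetical low-level `checksum` component and a hand-written test driver standing in for its not-yet-built caller:

```python
# Low-level component, completed and tested first (hypothetical example).
def checksum(data: bytes) -> int:
    return sum(data) % 256

# Test driver: a stand-in for the higher-level caller that does not exist yet.
def driver() -> str:
    assert checksum(b"") == 0
    assert checksum(b"\x01\x02") == 3
    assert checksum(bytes([255, 1])) == 0  # wraps at 256
    return "ok"

driver()
```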
Boundary Testing
Testing that focuses on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Boundary Value Analysis
A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
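A small sketch of boundary value selection, assuming a hypothetical validator whose valid range is 0..120 inclusive:

```python
# Hypothetical function under test: ages 0..120 inclusive are valid.
def is_valid_age(age: int) -> bool:
    return 0 <= age <= 120

# Boundary values: minimum, maximum, just inside, and just outside the range.
cases = {-1: False, 0: True, 1: True, 119: True, 120: True, 121: False}
for value, expected in cases.items():
    assert is_valid_age(value) == expected
```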
Brainstorming
A group process for generating creative and diverse ideas.
Branch Coverage Testing
A test method satisfying coverage criteria that require each possible branch at each decision point to be executed at least once.
Branch Testing
Testing wherein all branches in the program source code are tested at least once.
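The idea can be sketched with a toy function whose two decision points yield four branch outcomes; three inputs suffice to exercise every outcome:

```python
def sign(n: int) -> str:
    if n < 0:          # decision 1: true / false
        return "negative"
    if n == 0:         # decision 2: true / false
        return "zero"
    return "positive"

# Three inputs cover every branch outcome at least once.
assert sign(-5) == "negative"  # decision 1 true
assert sign(0) == "zero"       # decision 1 false, decision 2 true
assert sign(7) == "positive"   # decision 1 false, decision 2 false
```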
Breadth Testing
A test suite that exercises the full functionality of a product but does not test features in detail.
Bug
A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when the object is subjected to an appropriate test.
Cause-and-Effect (Fishbone) Diagram
A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.
Cause-Effect Graphing
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects. It has a beneficial side effect of pointing out incompleteness and ambiguities in specifications.
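A much-simplified sketch: causes are boolean input conditions, an effect is derived from them by a boolean expression (a hypothetical discount rule here), and test cases are chosen so that each cause demonstrably influences the effect:

```python
# Hypothetical rule: effect E1 (grant discount) fires when
# C1 (customer is a member) AND C2 (order total > 100) both hold.
def discount_applies(is_member: bool, order_total: float) -> bool:
    return is_member and order_total > 100

# One test case per cause combination that changes the effect:
assert discount_applies(True, 150) is True    # C1 and C2 -> E1
assert discount_applies(True, 50) is False    # dropping C2 suppresses E1
assert discount_applies(False, 150) is False  # dropping C1 suppresses E1
```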
Code Complete
Phase of development where functionality is implemented in its entirety; only bug fixes remain. All functions found in the Functional Specifications have been implemented.
Coverage Analysis
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer’s logic and assumptions.
Coding
The generation of source code.
Clear-Box Testing
Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since a “white box” is in fact opaque and does not really permit visibility into the code. Also known as glass-box or open-box testing.
Concurrency Testing
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
The end user who pays for the product and receives the benefit from its use.
Control Chart
A statistical method for distinguishing between common and special cause variation exhibited by processes.
Customer (end user)
The individual or organization, internal or external to the producing organization, that receives the product.
Cyclomatic Complexity
A measure of the number of linearly independent paths through a program module.
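For structured code the measure can be computed as the number of decision points plus one; a toy illustration (the `grade` function is hypothetical):

```python
def grade(score: int) -> str:
    if score < 0:       # decision 1
        return "invalid"
    if score < 50:      # decision 2
        return "fail"
    if score < 80:      # decision 3
        return "pass"
    return "distinction"

# For structured code, V(G) = decision points + 1.
v_of_g = 3 + 1
assert v_of_g == 4  # four linearly independent paths to cover
```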
Data Flow Analysis
Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points in the execution of the source program.
Defect
NOTE: Operationally, it is useful to work with two definitions of a defect:
1) From the producer’s viewpoint: a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product.
2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.
Defect Analysis
Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.
Defect Density
Ratio of the number of defects to program length (a relative number).
Desk Checking
A form of manual static analysis usually performed by the originator. Source code documentation, etc., is visually checked against requirements and standards.
Data Flow Diagram
A modeling notation that represents a functional decomposition of a system.
Data Driven Testing
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet.
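A minimal sketch: the test action is fixed while the data varies. The rows are shown inline here, but in practice they would live in a file or spreadsheet (the `to_fahrenheit` function is a hypothetical unit under test):

```python
import csv
import io

def to_fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32

# Externally maintained data drives the single, parameterized test action.
rows = io.StringIO("celsius,expected\n0,32\n100,212\n-40,-40\n")
for row in csv.DictReader(rows):
    assert to_fahrenheit(float(row["celsius"])) == float(row["expected"])
```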
Debugging
The process of finding and removing the causes of software failures.
Dependency Testing
Examines an application’s requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality.
Depth Testing
A test that exercises a feature of a product in full detail.
Dynamic Testing
Testing software through executing it.
Dynamic Analysis
The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.
Error
1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and
2) a mental mistake made by a programmer that may result in a program fault.
Error-Based Testing
Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.
Evaluation
The process of examining a system or system component to determine the extent to which specified properties are present.
Execution
The process of a computer carrying out an instruction or instructions of a computer program.
Endurance Testing
Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End Testing
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Exhaustive Testing
Testing which covers all combinations of input values and preconditions for an element of the software under test.
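Exhaustive testing is only feasible for tiny input domains; a sketch with a three-input boolean function, where all 2³ = 8 combinations can be enumerated:

```python
from itertools import product

def majority(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or (a and c) or (b and c)

# Every combination of the three boolean inputs is checked against
# an independent statement of the expected behavior.
for a, b, c in product([False, True], repeat=3):
    assert majority(a, b, c) == (sum([a, b, c]) >= 2)
```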
Failure
The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.
Failure-Directed Testing
Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.
Fault
A manifestation of an error in software. A fault, if encountered, may cause a failure.
Fault Tree Analysis
A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.
Fault-Based Testing
Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of pre-specified faults, typically frequently occurring faults.
Flowchart
A diagram showing the sequential steps of a process or of a workflow around a product or service.
Formal Review
A technical review conducted with the end user, including the types of reviews called for in the standards.
Function Points
A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor influence, 5 = strong influence.
Functional Testing
Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black box testing.
Gorilla Testing
Testing one particular module or functionality heavily.
Gray Box Testing
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
Heuristics Testing
Another term for failure-directed testing.
Histogram
A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.
Hybrid Testing
A combination of top-down testing with bottom-up testing of prioritized or available components.
Incremental Analysis
Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.
Infeasible Path
A program statement sequence that can never be executed.
Inputs
Products, services, or information needed from suppliers to make a process work.
Inspection
1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems.
2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Instrument
To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.
Integration
The process of combining software components or hardware components, or both, into an overall system.
Integration Testing
An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.
Interface
A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.
Interface Analysis
Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.
Intrusive Testing
Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested, or additional processes running concurrently with the software being tested on the same platform.
Recovery Testing
Confirms that the application under test recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.
Independent Verification and Validation (IV&V)
Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.
Life Cycle
The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.
Localization Testing
This term refers to adapting software for a specific locality.
Loop Testing
A white box testing technique that exercises program loops.
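The classic loop-testing cases exercise a loop at zero, one, two, and many iterations; a sketch with a hypothetical summing function:

```python
def total(values) -> int:
    s = 0
    for v in values:  # the loop under test
        s += v
    return s

# Zero, one, two, and "many" iterations of the loop body.
assert total([]) == 0
assert total([5]) == 5
assert total([1, 2]) == 3
assert total(range(100)) == 4950
```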
Manual Testing
That part of software testing that requires operator input, analysis, or evaluation.
Mean
A value derived by adding several quantities and dividing the sum by the number of those quantities.
Measurement
The act or process of measuring. A figure, extent, or amount obtained by measuring.
Metric
A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.
Monkey Testing
Testing a system or application on the fly, i.e., running just a few tests here and there to ensure that the system or application does not crash.
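A minimal sketch of the spirit of the technique, firing random inputs at a hypothetical unit where the only check is that nothing crashes:

```python
import random

def safe_div(a: int, b: int) -> float:
    # Hypothetical unit under test: guards against division by zero.
    return a / b if b else 0.0

# Random pokes at the unit; the test passes if no exception is raised.
rng = random.Random(0)  # seeded so the run is reproducible
for _ in range(1000):
    safe_div(rng.randint(-10, 10), rng.randint(-10, 10))
```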
Mutation Analysis
A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.
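A toy sketch: a hand-written mutant (a single operator change) stands in for an automatically generated variant, and the test set is thorough enough if some test "kills" the mutant by telling it apart from the original:

```python
def add(a: int, b: int) -> int:
    return a + b

def add_mutant(a: int, b: int) -> int:
    return a - b  # slight variant: '+' mutated to '-'

tests = [((2, 2), 4), ((1, 0), 1)]

# All tests pass on the original program...
assert all(add(*args) == want for args, want in tests)
# ...and at least one test distinguishes (kills) the mutant.
killed = any(add_mutant(*args) != want for args, want in tests)
assert killed  # (2, 2): mutant returns 0, not 4
```

Note that the second test alone would not kill this mutant, since 1 - 0 == 1 + 0; a surviving mutant signals a gap in the test set.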
Nonintrusive Testing
Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.
Negative Testing
Testing aimed at showing that software does not work. Also known as “test to fail”.
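A sketch of the "test to fail" mindset, using a hypothetical port parser: each invalid input must be rejected rather than silently accepted:

```python
def parse_port(text: str) -> int:
    port = int(text)  # raises ValueError for non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Negative tests: every bad input must raise, never slip through.
for bad in ["abc", "", "0", "70000"]:
    try:
        parse_port(bad)
        raise AssertionError(f"{bad!r} was wrongly accepted")
    except ValueError:
        pass

assert parse_port("8080") == 8080  # sanity check on a valid input
```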
Operational Requirements
Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.
Operational Testing
Testing performed by the end user on software in its normal operating environment.
Outputs
Products, services, or information supplied to meet end user needs.
Path Analysis
Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.
Path Coverage Testing
A test method satisfying coverage criteria that require each logical path through the program to be tested. Paths through the program often are grouped into a finite set of classes, and one path from each class is tested.
Peer Review
A methodical examination of software work products by the producer’s peers to identify defects and areas where changes are needed.
Policy
Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).
Problem
Any deviation from defined standards. Same as defect.
Procedure
The step-by-step method followed to ensure that standards are met.
Process
The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.
Process Improvement
To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.
Product
The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.
Product Improvement
To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.
Path Testing
Testing wherein all paths in the program source code are tested at least once.
Performance Testing
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often performed using an automated test tool to simulate a large number of users. Also known as “Load Testing”.
Positive Testing
Testing aimed at showing that software works. Also known as “test to pass”.
Productivity
The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).
Proof Checker
A program that checks formal proofs of program properties for logical correctness.
Prototyping
Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.
Qualification Testing
Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.