www.ChineseStandard.net Database: 189760 (18 Oct 2025)

GB/T 38634.4-2020 English PDF

US$2339.00 · In stock
Delivery: <= 12 days. True-PDF full-copy in English will be manually translated and delivered via email.
GB/T 38634.4-2020: Systems and software engineering - Software testing - Part 4: Test techniques
Status: Valid
Standard ID | Version | USD | Step 2 | PDF delivered in | Standard Title (Description) | Status
GB/T 38634.4-2020 | English | 2339 | Add to Cart | 12 days [need to translate] | Systems and software engineering - Software testing - Part 4: Test techniques | Valid

Standard similar to GB/T 38634.4-2020

GB/T 42450   GB/T 40685   GB/T 39788   GB/T 38634.5   GB/T 38634.2   GB/T 38634.3   

Basic data

Standard ID GB/T 38634.4-2020 (GB/T38634.4-2020)
Description (Translated English) Systems and software engineering - Software testing - Part 4: Test techniques
Sector / Industry National Standard (Recommended)
Classification of Chinese Standard L77
Classification of International Standard 35.080
Word Count Estimation 126,188
Date of Issue 2020-04-28
Date of Implementation 2020-11-01
Quoted Standard GB/T 38634.1; GB/T 38634.2; GB/T 38634.3
Adopted Standard ISO/IEC/IEEE 29119-4:2015, MOD
Issuing agency(ies) State Administration for Market Regulation, China National Standardization Administration
Summary This standard specifies the test design techniques used in the test design and implementation process of GB/T 38634.2. It applies to, but is not limited to, testers, test managers, developers, and project managers, especially those responsible for managing and implementing software testing.

GB/T 38634.4-2020: Systems and software engineering - Software testing - Part 4: Test techniques

---This is a DRAFT version for illustration, not a final translation. Full copy of true-PDF in English version (including equations, symbols, images, flow-chart, tables, and figures etc.) will be manually/carefully translated upon your order.
ICS 35.080
L77
National Standard of the People's Republic of China
GB/T 38634.4-2020
Systems and software engineering - Software testing - Part 4: Test techniques
Issued 2020-04-28; implemented 2020-11-01
Issued by the State Administration for Market Regulation and the Standardization Administration of China

Table of contents

Preface
Introduction
1 Scope
2 Compliance
3 Normative references
4 Terms and definitions
5 Test design techniques
5.1 Overview
5.2 Specification-based test design techniques
5.3 Structure-based test design techniques
5.4 Experience-based test design techniques
6 Test coverage measurement
6.1 Overview
6.2 Coverage measurement for specification-based test design techniques
6.3 Coverage measurement for structure-based test design techniques
6.4 Coverage measurement for experience-based test design techniques
Appendix A (informative) Test quality characteristics
Appendix B (informative) Application guidelines and examples for specification-based test design techniques
Appendix C (informative) Application guidelines and examples for structure-based test design techniques
Appendix D (informative) Application guidelines and examples for experience-based test design techniques
Appendix E (informative) Application guidelines and examples for interchangeable test design techniques
Appendix F (informative) Coverage effectiveness of test design techniques
Appendix G (informative) Comparison of test design techniques
References

1 Scope

This part of GB/T 38634 defines the test design techniques used in the test design and implementation process of GB/T 38634.2. This part is applicable to, but not limited to, testers, test managers, developers, and project managers, especially personnel responsible for managing and implementing software testing.

2 Compliance

2.1 Intended use
The normative requirements of this part are contained in Chapter 5 and Chapter 6. Since a given project or organization may not need all of the techniques defined here, implementing this part usually involves selecting a set of techniques suitable for the project or organization. An organization or individual can declare compliance with the provisions of this part in one of two ways: full compliance or tailored compliance. The organization or individual should declare whether it claims full or tailored compliance with this part.

2.2 Full compliance
Full compliance can be claimed by demonstrating that all requirements (i.e., sentences expressed with "should") of a selected, non-empty set of techniques in Chapter 5 and/or the corresponding test coverage measurement methods in Chapter 6 have been met.
Example: An organization can choose to satisfy only one technique, such as boundary value analysis. In that case, the organization only needs to provide evidence that the requirements of that technique have been met in order to claim compliance with this part.

2.3 Tailored compliance
Tailored compliance is achieved by demonstrating that a selected subset of the requirements of a selected, non-empty set of techniques and/or the corresponding test coverage measurement methods have been met. Where tailoring is applied, i.e., where the technique requirements defined in Chapter 5 or the measurement requirements defined in Chapter 6 are not followed in full, reasons should be given (whether the tailoring is made directly or by reference). All tailoring decisions should be documented together with their rationale, including consideration of any applicable risks. Tailoring should be agreed with the stakeholders.

3 Normative references

The following documents are indispensable for the application of this document. For dated references, only the dated edition applies to this document. For undated references, the latest edition (including all amendments) applies to this document.

GB/T 38634.1 Systems and software engineering - Software testing - Part 1: Concepts and definitions (GB/T 38634.1-2020, ISO/IEC/IEEE 29119-1:2013, MOD)
GB/T 38634.2 Systems and software engineering - Software testing - Part 2: Test processes (GB/T 38634.2-2020, ISO/IEC/IEEE 29119-2:2013, MOD)
GB/T 38634.3 Systems and software engineering - Software testing - Part 3: Test documentation (GB/T 38634.3-2020, ISO/IEC/IEEE 29119-3:2013, MOD)

4 Terms and definitions

The following terms and definitions apply to this document.

4.1 Backus-Naur form: A formal metalanguage used to define the syntax of a language in textual form.
4.2 Base choice: See base value (4.3).
4.3 Base value: The value of an input parameter used in base choice testing, usually a representative or typical value of the parameter. Also called base choice.
4.4 C-use: See computation data use (4.5).
4.5 Computation data use: A use of the value of a variable in any type of statement.
4.6 Condition: A Boolean expression containing no Boolean operators.
4.7 Control flow: The sequence of operations executed during a run of a test item.
4.8 Control flow sub-path: A sequence of executable statements within a test item.
4.9 Data definition: A statement that assigns a value to a variable. Also called variable definition.
4.10 Data definition C-use pair: A data definition and a subsequent computation data use, where the data use uses the value set in the data definition.
4.11 Data definition P-use pair: A data definition and a subsequent predicate data use, where the data use uses the value set in the data definition.
4.12 Data definition-use pair: A data definition and a subsequent data use, where the data use uses the value set in the data definition.
4.13 Data use: An executable statement that accesses the value of a variable.
4.14 Decision outcome: The result of a decision, which determines the direction taken by the control flow.
4.15 Decision rule: A combination of conditions (also called causes) and actions (also called effects) that produces a specific outcome in decision table testing and cause-effect graphing.
4.16 Definition-use pair: A data definition and a subsequent predicate data use or computation data use, where the data use uses the value set in the data definition.
4.17 Definition-use path: A control flow sub-path from a variable definition to a predicate use (p-use) or computation use (c-use) of that variable.
4.18 Entry point: The point in a test item at which execution of the test item begins. Note: The entry point is an executable statement of the test item that can be selected by an external process as the start of one or more paths through the test item. It is usually the first executable statement in the test item.
4.19 Executable statement: A statement that is translated into object code when compiled; the object code is executed procedurally when the test item runs and may perform an action on data.
4.20 Exit point: The last statement of a test item that is executed. Note: The exit point is the end of a path through the test item and is an executable statement within the test item. It either terminates the test item or returns control to an external process. It is usually the last executable statement in the test item.
4.21 P-use: See predicate data use (4.25).
4.22 Key-value pair: The combination of a test item parameter and a value assigned to that parameter, used as test conditions and test coverage items in combinatorial test design techniques.
4.23 Path: A sequence of executable statements in a test item.
4.24 Predicate: A logical expression that evaluates to "true" or "false", usually used to direct the execution path in code.
4.25 Predicate data use: A data use associated with the decision outcome of the predicate part of a decision statement.
4.26 Sub-path: Part of a longer path.
4.27 Test model: A representation of the test item used during test case design.
4.28 Variable definition: See data definition (4.9).
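The data-flow terms of Chapter 4 are easiest to see in code. The fragment below is an informal illustration only (it is not part of the standard's text, and the function is hypothetical):

```python
def classify(x):
    """Hypothetical test item illustrating the data-flow terms of Chapter 4."""
    y = x * 2         # data definition of y (4.9); also a computation data use of x (4.5)
    if y > 10:        # predicate data use (p-use) of y (4.25); the decision outcome (4.14)
        return "big"  #   determines which branch the control flow (4.7) takes
    return "small"    # the definition of y plus its p-use form a definition-use pair (4.16)

print(classify(6))  # each executed path is a control flow sub-path (4.8)
print(classify(3))
```

Here the pair (definition of y, p-use of y) would be one of the definition-use pairs that structure-based data-flow techniques aim to cover.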

5 Test design techniques

5.1 Overview
This chapter defines specification-based test design techniques (5.2), structure-based test design techniques (5.3), and experience-based test design techniques (5.4). In specification-based testing, the test basis (e.g., requirements, specifications, models, or user needs) is the primary source of information for designing test cases. In structure-based testing, the structure of the test item (e.g., source code or model structure) is the primary source of information for designing test cases. In experience-based testing, the knowledge and experience of the tester is the primary source of information for designing test cases. In specification-based, structure-based, and experience-based testing alike, the test basis is used to generate the expected results. These test design techniques are complementary, and using them in combination makes testing more effective.

Although the techniques in this part are grouped into three categories (specification-based, structure-based, and experience-based), in practice they can be used interchangeably (for example, branch testing can be used to design test cases that test the logical paths through the graphical user interface of an internet system). An example is given in Appendix E. In addition, although each technique is defined independently of all other techniques, in practice they can be combined with other techniques.

Example 1: Test coverage items derived by equivalence partitioning can be used as input parameters of test cases derived by scenario testing.

This part uses the terms specification-based testing and structure-based testing. These categories are also known as "black-box testing" and "white-box testing". The terms "black-box testing" and "white-box testing" refer to the visibility of the internal structure of the test item: in black-box testing the internal structure of the test item is not visible, while in white-box testing it is visible. A test design technique based on both the specification and the structure of the test item is called "grey-box testing".

This part defines how each test design technique is applied in steps TD2 (derive test conditions), TD3 (derive test coverage items), and TD4 (derive test cases) of the generic test design and implementation process described in GB/T 38634.2 (see the Introduction). This part does not define the specific context in which these techniques are used; that is, it does not explain the approach taken by each technique in every situation. Users of this part can consult Appendix B, Appendix C, Appendix D, and Appendix E for detailed examples of how to apply these techniques. Appendix F explains the coverage relationships between structure-based test design techniques. The techniques defined in this part are shown in Figure 2. This set of techniques is not exhaustive; techniques used by some testers and researchers are not included in this part. Appendix G gives the mapping between the techniques defined in this part and the test design techniques of GB/T 15532-2008.

Example 3: Equivalence partitioning and boundary value analysis are both based on equivalence classes.

In the derive test cases step (TD4) of each technique, the test cases created can be "valid" (i.e., the inputs of the test item are considered correct and accepted) or "invalid" (i.e., at least one input of the test item is considered incorrect and rejected, ideally with an appropriate error message). In some techniques, such as equivalence partitioning and boundary value analysis, the "one-to-one" method is usually used to derive invalid test cases, because it avoids fault masking by ensuring that each test case contains only one invalid input value, while valid test cases are usually derived with the "minimized" method, which reduces the number of test cases needed to cover the valid test coverage items (see 5.2.1.3 and 5.2.3.3).

Note: Invalid test cases are also called "negative test cases".

Although the techniques defined in this part are described separately (seemingly mutually exclusive), in practice they can be used in combination.

Example 4: Boundary value analysis can be used to select test input values, after which pairwise testing is used to design test cases from those input values. Equivalence partitioning can be used to select the classifications and classes of a classification tree, after which a single-choice testing technique is used to construct test cases from the classes.

The techniques presented in this part can also be used together with the test types given in Appendix A. For example, equivalence partitioning can be used in usability testing to identify user groups (test conditions) and representative users (test coverage items).

This chapter gives the normative definitions of the techniques; Chapter 6 gives the corresponding normative coverage measures for each technique. Appendix B, Appendix C, Appendix D, and Appendix E give informative examples of each technique. Although the examples of each technique in this part are manual, in practice automated tools can support certain kinds of design and execution (for example, a statement coverage analyzer can be used to support structure-based testing). Appendix A shows examples of applying the test design techniques defined in this part to test the quality characteristics defined in GB/T 25000.10-2016.
5.2 Specification-based test design techniques

5.2.1 Equivalence partitioning

5.2.1.1 Derive test conditions (TD2)
Equivalence partitioning uses a model of the test item to divide its inputs and outputs into equivalence classes (also called "partitions"), each of which should be used as a test condition. These equivalence classes should be derived from the test basis such that all values in a partition can reasonably be expected to be treated in the same way by the test item (i.e., the values in an equivalence class are "equivalent"). Equivalence classes can be derived for both valid and invalid inputs and outputs.
Example: For a test item that expects lower-case alphabetic characters as (valid) input, the invalid input equivalence classes that could be derived include integers, real numbers, upper-case alphabetic characters, symbols, and control characters, depending on the rigour required of the testing.
Note 1: For output equivalence classes, the corresponding input partitions are derived from the processing described in the test item specification; test inputs are then selected from the input partitions.
Note 2: Invalid output equivalence classes usually correspond to any output that is not explicitly specified. Being unspecified, they are usually derived from the subjective judgement of each tester. This subjective form of test design can also arise when applying experience-based techniques (such as error guessing).
Note 3: The domain analysis method is usually regarded as a combination of equivalence partitioning and boundary value analysis.

5.2.1.2 Derive test coverage items (TD3)
Each equivalence class should be a test coverage item (i.e., in equivalence partitioning, the test conditions and the test coverage items are the same equivalence classes).

5.2.1.3 Derive test cases (TD4)
The derived test cases should exercise each test coverage item (i.e., equivalence class). The steps to derive test cases are as follows:
a) Choose the combination method used to select the test coverage items exercised by the test cases. Two common methods are:
1) one-to-one, where each derived test case covers one specific equivalence class;
2) minimized, where the equivalence classes are covered by test cases such that the minimum number of derived test cases covers all equivalence classes at least once.
Note: Other methods of selecting test cases to exercise test coverage items are given in 5.2.5 (combinatorial test design techniques).
b) Using the method of step a), select the test coverage items to be covered by the current test case.
c) Determine input values that exercise the test coverage items covered by the test case, and arbitrary valid values for any other input variables required by the test case.
d) Determine the expected result of the test case by applying the inputs to the test basis.
e) Repeat steps b) to d) until the required test coverage is achieved.

5.2.2 Classification tree method

5.2.2.1 Derive test conditions (TD2)
The classification tree method uses a model of the test item to partition its inputs, represented graphically as a classification tree. The inputs of the test item are divided into "classifications", each composed of disjoint (non-overlapping) "classes" and sub-classes that together are complete (the whole input domain of the modelled test item is identified and covered by the classifications). Each classification should be a test condition. Depending on the rigour of the testing, the "classes" obtained by decomposing a classification may be further divided into "sub-classes". Depending on the required test coverage, the derived classifications and classes may include both valid and invalid input data.
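As an informal illustration only (not part of the standard's text), the one-to-one derivation of 5.2.1.3 might be sketched as follows, assuming a hypothetical test item that accepts a month number from 1 to 12:

```python
# Hypothetical test item: accepts an integer month 1..12 and rejects anything else.
# Per 5.2.1.2, each equivalence class is both a test condition and a test coverage item.
partitions = {
    "valid_month": {"representative": 6, "valid": True},    # valid class: 1 <= m <= 12
    "too_small":   {"representative": 0, "valid": False},   # invalid class: m < 1
    "too_large":   {"representative": 13, "valid": False},  # invalid class: m > 12
}

def derive_test_cases(partitions):
    """One-to-one derivation (5.2.1.3, step a.1): one test case per equivalence class."""
    cases = []
    for name, p in partitions.items():
        # The expected result comes from the test basis: valid inputs are accepted,
        # invalid inputs are rejected (ideally with an appropriate error message).
        expected = "accepted" if p["valid"] else "rejected"
        cases.append({"covers": name, "input": p["representative"], "expected": expected})
    return cases

for tc in derive_test_cases(partitions):
    print(tc)
```

Note that each derived invalid test case contains only one invalid input value, which is the fault-masking rationale for the one-to-one method mentioned in 5.1.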
The hierarchical relationships between classifications, classes, and sub-classes are modelled as a tree, with the input domain of the test item as the root node, the classifications as branch nodes, and the classes or sub-classes as leaf nodes.
Note: The partitioning in the classification tree method is similar to equivalence partitioning. The key difference is that in the classification tree method the partitions (classifications and classes) must be completely disjoint, whereas in equivalence partitioning they may overlap, depending on how the technique is applied. In addition, the classification tree method provides a visual representation of the test conditions through the design of the classification tree.

5.2.2.2 Derive test coverage items (TD3)
Test coverage items should be derived by combining classes using a chosen combination method.
Example: Two methods of combining classes to derive test coverage items are:
--- minimized, where the minimum number of test coverage items is derived such that every class is covered at least once;
--- maximized, where every possible combination of classes is covered by at least one test coverage item.
Note 1: 5.2.5 (combinatorial test design techniques) describes other methods of selecting the combinations of test coverage items to be exercised by test cases.
Note 2: Test coverage items are usually represented by a combination table (see Figure B.5 in B.2.2.5).
Note 3: The earliest publications on the classification tree method used the terms "minimal" and "maximal" instead of "minimized" and "maximized".

5.2.2.3 Derive test cases (TD4)
The derived test cases should exercise each test coverage item. The steps to derive test cases are as follows:
a) From the combinations of classes generated in step TD3, select for the current test case a combination that is not yet covered by any test case;
b) Determine input values for any classes that have not yet been assigned values;
c) Determine the expected result of the test case by applying the inputs to the test basis;
d) Repeat steps a) to c) until the required level of test coverage is achieved.

5.2.3 Boundary value analysis

5.2.3.1 Derive test conditions (TD2)
Boundary value analysis analyses the boundary values of a model of the test item to divide its inputs and outputs into ordered sets and subsets (partitions and sub-partitions) with identifiable boundaries, and each boundary should be a test condition. The boundaries should be derived from the test basis.
Example: A partition is defined as the integers from 1 to 10. The partition has two boundaries: a lower boundary of 1 and an upper boundary of 10. These are the test conditions.
Note: For output boundaries, the corresponding input partitions can be derived from the processing described in the test item specification; test inputs are then selected from the input partitions.

5.2.3.2 Derive test coverage items (TD3)
One of the following two methods can be chosen to derive test coverage items:
--- two-value boundary testing;
--- three-value boundary testing.
For two-value boundary testing, two test coverage items should be derived for each boundary (test condition): the boundary value itself and the equivalence class an incremental distance outside the boundary. The incremental distance should be defined as the smallest valid value of the data type under consideration.
For three-value boundary testing, three test coverage items should be derived for each boundary (test condition): the boundary value itself and the equivalence classes an incremental distance on each side of the boundary.
The incremental distance should be defined as the smallest valid value of the data type under consideration.
Note 1: Some partitions have only one boundary identifiable from the test basis. For example, the data partition "age >= 70 years" has a lower boundary but no obvious upper boundary. In such cases, a boundary imposed by the actual implementation, such as the largest acceptable value of the input domain, can be used as the boundary value (such decisions need to be recorded, for example in the test specification document).
Note 2: Two-value boundary testing is sufficient in most situations; however, three-value boundary testing may be required in some situations (e.g., where testers and developers are testing rigorously to confirm that boundary defects do not occur in the test item).
Note 3: When performing two-value or three-value boundary testing, contiguous partitions (partitions that share a boundary) result in duplicated test coverage items. In this case the classic approach is to exercise these duplicated values only once. See B.2.3.4.3 for an example of duplicated boundaries.

5.2.3.3 Derive test cases (TD4)
The derived test cases should exercise each test coverage item. The steps to derive test cases are as follows:
a) Choose the combination method used to select the test coverage items exercised by the test cases. Two common methods are:
1) one-to-one, where each test case exercises one specified boundary value;
2) minimized, where the minimum number of test cases is derived to cover all boundary values at least once.
Note 1: In minimized boundary value analysis, each test case can cover multiple test coverage items.
Note 2: 5.2.5 (combinatorial test design techniques) describes other methods of selecting the combinations of test coverage items to be exercised by test cases.
b) Using the method of step a), select the test coverage items to be covered by the current test case.
c) Assign arbitrary valid values to any other input variables not selected in step b).
d) Determine the expected result of the test case by applying the inputs to the test basis.
e) Repeat steps b) to d) until the required level of test coverage is achieved.

5.2.4 Syntax testing

5.2.4.1 Derive test conditions (TD2)
Syntax testing uses a formal grammar of the inputs of the test item as the basis for test design. The grammar is modelled as a set of rules, each of which defines the format of an input parameter in terms of sequences, iterations, or selections of grammar elements. The grammar can be expressed in textual or graphical form. The test conditions in syntax testing should be all or part of the input model of the test item.
Example 1: Backus-Naur form is a formal metalanguage that defines the grammar of a test item in textual form.
Example 2: An abstract syntax tree represents a formal grammar graphically.

5.2.4.2 Derive test coverage items (TD3)
In syntax testing, test coverage items are derived with two goals: positive testing, where the derived test coverage items cover the grammar validly in various ways; and negative testing, where the derived test coverage items deliberately violate the grammar rules in various ways. Test coverage items for positive testing should be defined for the "options...
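As an informal illustration only (not part of the standard's text), the two-value and three-value coverage items of 5.2.3.2 might be derived as follows for the integer partition 1 to 10 used in the example of 5.2.3.1 (the helper function is hypothetical):

```python
def boundary_coverage_items(lower, upper, increment=1, three_value=False):
    """Derive test coverage items for an ordered integer partition [lower, upper]
    per 5.2.3.2: two-value testing takes each boundary plus the value one
    increment outside it; three-value testing also takes the value one
    increment inside it. The increment is the smallest valid value of the
    data type (1 for integers)."""
    items = []
    for bound, outside in ((lower, lower - increment), (upper, upper + increment)):
        vals = [bound, outside]
        if three_value:
            # add the value one increment inside the boundary as well
            inside = bound + increment if bound == lower else bound - increment
            vals.append(inside)
        items.append(sorted(vals))
    return items

# Partition "integers from 1 to 10" (the example in 5.2.3.1):
print(boundary_coverage_items(1, 10))                    # [[0, 1], [10, 11]]
print(boundary_coverage_items(1, 10, three_value=True))  # [[0, 1, 2], [9, 10, 11]]
```

A one-to-one derivation (5.2.3.3, step a.1) would then turn each of these values into its own test case, while a minimized derivation would pack several values into fewer test cases.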

Tips & Frequently Asked Questions:

Question 1: How long will the true-PDF of GB/T 38634.4-2020_English be delivered?

Answer: Upon your order, we will start to translate GB/T 38634.4-2020_English as soon as possible and keep you informed of the progress. The lead time is typically 8 to 12 working days; the lengthier the document, the longer the lead time.

Question 2: Can I share the purchased PDF of GB/T 38634.4-2020_English with my colleagues?

Answer: Yes. The purchased PDF of GB/T 38634.4-2020_English is deemed to be sold to the employer/organization that actually pays for it, so it may be shared with your colleagues and on your employer's intranet.

Question 3: Does the price include tax/VAT?

Answer: Yes. Our tax invoice, downloaded/delivered in 9 seconds, includes all tax/VAT and complies with 100+ countries' tax regulations (tax exempted in 100+ countries) -- See Avoidance of Double Taxation Agreements (DTAs): List of DTAs signed between Singapore and 100+ countries

Question 4: Do you accept my currency other than USD?

Answer: Yes. If you need your currency to be printed on the invoice, please write an email to [email protected]. Within 2 working hours, we will create a special link for you to pay in any currency. Otherwise, follow the normal steps: Add to Cart -- Checkout -- Select your currency to pay.