Standard ID: LY/T 3127-2019
Description (translated English): Specifications for forestry application system quality control and test
Sector / Industry: Forestry industry standard (recommended)
Classification of Chinese standard: B60
Classification of international standard: 65.020.99
Word count estimation: 30,382
Date of issue: 2019-10-23
Date of implementation: 2020-04-01
Regulation (derived from): Announcement of the State Forestry and Grassland Administration No. 17 of 2019
Issuing agency: State Forestry and Grassland Administration
LY/T 3127-2019
(Forestry application system quality control and testing)
ICS 65.020.99
B 60
LY
People's Republic of China Forestry Industry Standard
Forestry application system quality control and testing
Specifications for forestry application system quality control and test
Issued: 2019-10-23
Implemented: 2020-04-01
Published by the National Forestry and Grassland Administration
Contents
1 Scope
2 Normative references
3 Terms and definitions
4 Quality control of forestry application systems
4.1 Overview
4.2 Quality assurance plan development
4.3 Quality assurance plan implementation
4.4 Quality evaluation
4.5 Quality problem management
5 Forestry application system quality testing
5.1 Overview
5.2 Test purpose
5.3 Test categories
5.4 Test contents
5.5 Test activities
5.6 Test methods
5.7 Test cases
5.8 Test management
5.9 Test review
Appendix A (informative) Common templates for test documents
Appendix B (normative) Unit test instructions
Appendix C (normative) System test instructions
Appendix D (normative) Regression test instructions
References
Foreword
This standard was drafted in accordance with the rules given in GB/T 1.1-2009.
This standard was proposed by the Information Center of the State Forestry and Grassland Administration.
This standard is under the jurisdiction of the National Forestry Information and Data Standardization Technical Committee (SAC/TC 386).
The drafting of this standard was organized by: Beijing Forestry University, the National Forestry and Grassland Information Center, and the National Engineering Research Center for Software Engineering at Peking University.
The main drafters of this standard: Li Dongmei, Gu Hongbo, Liu Xueyang, Tan Wen, Su Xiang, Zhuang Tingting, Li Dongyuan, Lin Danqiong, Zhang Qi, and Li Lu.
Forestry application system quality control and testing
1 Scope
This standard specifies requirements for quality control and quality testing during the development and maintenance of forestry application systems.
This standard applies to the development and maintenance of forestry application systems, to system developers' internal control management and self-testing, and to evaluation by third-party testing institutions.
Note: In this standard, "forestry application system" refers only to forestry application software.
2 Normative references
The following documents are essential for the application of this document. For dated references, only the dated version applies to this document.
For undated references, the latest version (including all amendments) applies to this document.
GB/T 8566 Information technology — Software life cycle processes
GB/T 12504 Specification for computer software quality assurance plans
LY/T 2265 Forestry information terminology
LY/T 2925 Forestry information system quality specification
3 Terms and definitions
The terms and definitions defined in GB/T 8566-2007, GB/T 12504-1990 and LY/T 2265-2014 apply to this document.
3.1
Forestry application system
Software or programs used to solve application problems in forestry informatization.
4 Quality control of forestry application systems
4.1 Overview
Quality control of forestry application systems covers quality assurance plan development, quality assurance plan implementation, quality evaluation, and quality problem management.
4.2 Quality Assurance Plan Development
Quality assurance plan development refers to the project developer's determination of the process activities to be carried out and of the contents, methods, and timing of work product inspections.
When developing different forestry application systems, the quality assurance plan should reflect different priorities based on actual business needs.
The quality assurance plan should include:
a) Management: describe the organization, tasks, and responsibilities for quality assurance of the forestry application system;
b) Documents: list the documents to be prepared in the development, verification, validation, use, and maintenance stages of the forestry application system, and describe the criteria for their review and inspection;
c) Review and inspection: describe the technical and management reviews and inspections carried out to achieve quality assurance, list the quality control activities of each project phase, and prepare or reference the relevant review and inspection procedures and technical judgment criteria;
d) Configuration management: describe the configuration management content of the forestry application system;
e) Tools, techniques, and methods: indicate the tools, techniques, and methods used to support quality assurance of the forestry application system project, and explain their purpose and use;
f) Control: describe the control content related to the forestry application system;
g) Record collection, maintenance, and retention: indicate the quality assurance records to be kept, and specify the methods and facilities for summarizing, protecting, and maintaining those records, as well as their retention period.
4.3 Quality Assurance Plan Implementation
Quality assurance plan implementation refers to the quality assurance personnel's organization and execution, at the specified time points or project milestones, of the quality assurance activities defined in the quality assurance plan.
4.4 Quality Evaluation
Quality evaluation refers to evaluating the functions and performance of the system on the basis of work products such as the documents and programs produced during system development.
The focus of quality evaluation of forestry application systems should include:
a) functional characteristics;
b) reliability characteristics;
c) usability characteristics;
d) efficiency characteristics.
The specific capabilities of forestry application systems should be comprehensively evaluated through testing and other methods, and the evaluation process and results should be correct, objective, concise, and complete.
4.5 Quality Problem Management
Quality problem management refers to the systematic management of system problems found during review or testing. Its requirements include:
a) Find problems: find and record system problems during review or testing, describing each problem comprehensively and specifically;
b) Analyze causes: analyze the causes of recorded problems, considering both objective factors such as programs and equipment and personnel factors, and form a detailed description of each problem;
c) Formulate corrective measures: systematically formulate reasonable and effective solutions according to the cause of each problem, and identify the responsible persons and deadlines;
d) Close out problems: implement the formulated corrective measures to prevent recurrence of the problem. If a corrective measure fails to achieve the expected result, repeat steps b) to c).
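The closed loop of steps a) to d) can be sketched as a small defect record whose corrective measure is revised and re-verified until the problem is closed. The record fields, function names, and example data below are illustrative only and are not defined by this standard.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    """A quality problem found during review or testing (step a)."""
    description: str
    cause: str = ""        # filled in by cause analysis (step b)
    action: str = ""       # corrective measure (step c)
    owner: str = ""        # person responsible for the measure
    closed: bool = False   # set once the fix is verified (step d)

def close_out(defect: Defect, verify, max_rounds: int = 3) -> Defect:
    """Apply the corrective measure; if verification fails, repeat steps b)-c)."""
    for _ in range(max_rounds):
        if verify(defect):
            defect.closed = True
            break
        # Measure did not achieve the expected result: revise it and retry.
        defect.action += " (revised)"
    return defect

example = close_out(
    Defect("login fails on empty password", cause="missing null check",
           action="add an input check", owner="test lead"),
    verify=lambda d: "(revised)" in d.action,
)
# One extra b)-c) round was needed before verification passed.
```

The `verify` callback stands in for whatever re-review or re-test confirms that the measure worked; in practice it would run the failing test again.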
5 Forestry application system quality testing
5.1 Overview
The quality testing process for forestry application systems should be guided by a complete test plan, and the test purpose, test categories, test contents, test activities, test methods, test cases, and test management should form a complete closed loop to ensure the efficiency of the testing work.
5.2 Test purpose
The purpose of each test of the forestry application system should be specified. Test purposes include:
a) identify errors in the development of forestry application systems and ensure that they are fixed;
b) verify that the system meets the quality requirements specified in the project development contract, requirements description, and system design documents;
c) Provide a basis for evaluating the quality of the system.
5.3 Test categories
Test categories include:
a) Unit testing: testing of the smallest testable unit of the forestry application system to check whether it meets requirements and to find any errors; for specific test content and instructions, see Appendix B;
b) System testing: testing of the forestry application system to check whether it works normally in the real working environment and meets the requirements specified in the system design documents; for specific test content and instructions, see Appendix C;
c) Regression testing: testing of objects that failed unit or system testing, as well as of changed objects; for specific test content and instructions, see Appendix D.
5.4 Test contents
Test contents include:
a) Functional characteristics: test the functions specified in the project development contract, requirements description, and system design documents;
b) Capability characteristics: test the degree to which the developed forestry application system meets business needs and its ability to operate reliably;
c) Date/time characteristics: test the developed system's ability to control dates/times; for requirements, see 4.2.1 in LY/T 2925-2017;
d) Throughput characteristics: test the efficiency with which the system completes tasks; for requirements, see 4.2.14 in LY/T 2925-2017;
e) Interoperability characteristics: test the system's interaction capabilities; for requirements, see 4.2.23 in LY/T 2925-2017;
f) Accuracy characteristics: test the correctness of events, situations, or data in the system; for requirements, see 4.2.24 in LY/T 2925-2017;
g) Resilience characteristics: test the system's ability to recover in the event of an error; for requirements, see 4.2.29 in LY/T 2925-2017;
h) Protection characteristics: test the system's ability to protect resources or information; for requirements, see 4.2.34 in LY/T 2925-2017;
i) Security characteristics: test the system's ability to be used safely; for requirements, see 4.2.35 in LY/T 2925-2017;
j) Access control characteristics: test the system's protection against unauthorized access to resources; for requirements, see 4.2.36 in LY/T 2925-2017;
k) Data protection characteristics: test the system's protection against unauthorized access to data; for requirements, see 4.2.37 in LY/T 2925-2017;
l) Identification characteristics: test the capability of the system's authentication strategies; for requirements, see 4.2.39 in LY/T 2925-2017;
m) Usability characteristics: test user satisfaction with the services the system provides; for requirements, see 4.2.40 in LY/T 2925-2017;
n) Reliability characteristics: test the system's ability to work without faults; for requirements, see 4.2.44 in LY/T 2925-2017;
o) Fault tolerance characteristics: test the system's ability to provide services normally in the presence of faults; for requirements, see 4.2.45 in LY/T 2925-2017;
p) Maintainability characteristics: test the repairability of the system when it becomes unavailable; for requirements, see 4.2.48 in LY/T 2925-2017;
q) Sharing characteristics: test the shareability of the system; for requirements, see 4.2.50 in LY/T 2925-2017.
5.5 Test activities
5.5.1 Overview
Forestry application system quality testing activities include test planning, test design and implementation, test execution, and test summary.
5.5.2 Test planning
Test planning work includes:
a) determine what content or quality characteristics need to be tested;
b) determine the adequacy of the test;
c) propose basic methods for testing;
d) determine the resources and technical requirements for testing;
e) Develop test resource plans and test schedules.
A reasonable test plan should be selected according to the business relevance of the forestry application system to ensure the comprehensiveness and integrity of the test.
5.5.3 Test Design and Implementation
Test design and implementation work includes:
a) Analyze the hierarchical structure of the test case set, select and design test cases;
b) obtaining and verifying test data;
c) Determine the execution order of test cases according to constraints such as test resources and risks;
d) Obtaining test resources and, if required, developing corresponding test software;
e) establish and calibrate the test environment;
f) conduct a test readiness review, that is, review the rationality of the test plan; the correctness, validity, and adequacy of the test cases; and the completeness and compliance of the testing organization, environment, and equipment.
5.5.4 Test execution
Test execution work includes:
a) execute test cases and obtain test results;
b) analyze and judge the test results, and take corresponding measures according to the judgment;
c) check whether the test terminated normally or abnormally; based on the check results, for test cases that did not meet the test termination conditions, decide either to stop testing or to modify and supplement the test case set and continue testing.
5.5.5 Test summary
Test summary work includes:
a) evaluate the test effectiveness and the items tested, and describe the test status, including differences between the actual testing and the test plan and test instructions, test adequacy analysis, and unresolved test events;
b) describe the status of the items tested, including differences between the tested items and the requirements, and system errors found;
c) complete the system test report and pass the test review.
5.6 Test method
5.6.1 Static test method
Static test methods include checklists and static analysis, where:
a) static testing of documents should be carried out in the form of checklists;
b) static testing of code should be carried out in the form of code reviews, code walkthroughs, and static analysis; static analysis includes control flow analysis, data flow analysis, interface analysis, and expression analysis.
Testers should review, walk through, or statically analyze the system code.
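As an illustration of the data-flow style of static analysis named above (the example, its function name, and the sample code are not part of this standard), a few lines of Python can flag function parameters that are never read:

```python
import ast

def unused_params(source: str) -> dict:
    """Tiny data-flow check: report parameters a function never reads."""
    report = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            params = {a.arg for a in node.args.args}
            # Every name referenced anywhere in the function body.
            used = {n.id for n in ast.walk(node) if isinstance(n, ast.Name)}
            if params - used:
                report[node.name] = sorted(params - used)
    return report

sample = """
def stand_volume(height, diameter, species):
    return 0.42 * height * diameter ** 2
"""
print(unused_params(sample))  # {'stand_volume': ['species']}
```

Real static analyzers go much further (control flow, interfaces, expressions), but the principle is the same: facts are derived from the code without executing it.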
5.6.2 Dynamic test method
Dynamic test methods include white-box test methods and black-box test methods, of which:
a) white-box test methods may use control flow testing (including statement coverage testing, branch coverage testing, condition coverage testing, condition combination coverage testing, and path coverage testing), data flow testing, program mutation, program instrumentation, domain testing, and symbolic evaluation;
b) black-box test methods may use functional decomposition, boundary value analysis, decision tables, cause-effect diagrams, random testing, error guessing, orthogonal testing, and so on.
In the dynamic test process, appropriate test methods should be adopted to meet the test requirements: system testing mainly uses black-box test methods, while unit testing mainly uses white-box test methods, supplemented by black-box test methods.
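To illustrate one of the black-box methods listed in b), boundary value analysis probes just below, on, and just above each boundary of a valid range. The age range and the function under test below are assumptions made for this sketch, not requirements of this standard.

```python
def boundary_cases(low: int, high: int) -> list:
    """Boundary value analysis for an integer range [low, high]:
    both boundaries, their immediate neighbours, and one nominal mid value."""
    return [low - 1, low, low + 1, (low + high) // 2, high - 1, high, high + 1]

# Hypothetical object under test: a form field accepting a stand age of 1..200.
def accepts_stand_age(age: int) -> bool:
    return 1 <= age <= 200

results = {age: accepts_stand_age(age) for age in boundary_cases(1, 200)}
# The two out-of-range probes (0 and 201) must be rejected, all others accepted.
```

Off-by-one errors cluster at range edges, which is why each boundary gets three probes while the interior gets only one.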
5.7 Test cases
5.7.1 Test Case Design Principles
Test case design follows these principles:
a) the principle of basing design on test requirements: test cases should be designed according to the different requirements of each test category, with unit testing based on the detailed design documents of the system and system testing based on the user requirements documents;
b) the principle of basing design on test methods: the test case design methods used should be specified; to achieve different test adequacy requirements, methods such as equivalence class partitioning, boundary value analysis, error guessing, and cause-effect diagrams may be used;
c) the principle of balancing test adequacy and efficiency: the test case set should consider both the adequacy and the efficiency of testing, and the content of each test case should be complete and operable;
d) the principle of repeatable execution: the repeatability of test case execution should be ensured.
5.7.2 Test case elements
Test case elements include:
a) Name and identification: each test case should have a unique name and identifier;
b) Test tracing: indicate the source of the content on which the test is based; for example, system testing is based on user requirements and unit testing is based on system design;
c) Use case description: briefly describe the test object, purpose, and test methods used;
d) Test initialization requirements, including:
1) hardware environment: the hardware environment in which the system under test runs;
2) software environment: the software environment required by the system under test, including the initial conditions of the test;
3) test configuration: the configuration of the test system, including the configuration of any simulation systems and test tools used;
4) parameter settings: the settings, before the test starts, of flags, initial breakpoints, pointers, control parameters, and initialization data;
5) other special instructions for the test case.
e) Test input: all test commands, data, and signals sent to the object under test during test case execution, including:
1) the specific content of each test input (including definite values, states, or signals) and its properties (including valid values, invalid values, boundary values, etc.);
2) the source of the test input (including test program generation, disk files, network reception, manual keyboard input, etc.) and the method used to select the input (including equivalence class partitioning, boundary value analysis, error guessing, cause-effect diagrams, function diagram methods, etc.);
3) whether the test input is real or simulated;
4) the chronological or event sequence of the test inputs.
f) Expected test results: the test results the system under test is expected to produce during execution of the test case, that is, the results verified to be correct. Expected test results should have specific content, including definite values, states, or signals, and should not be vague concepts or general descriptions;
g) Criteria for evaluating test results: the criteria for judging whether the intermediate and final results produced during test case execution are correct, mainly including:
1) the accuracy required of the actual test results;
2) the allowable upper and lower deviations between actual and expected test results;
3) the maximum and minimum time intervals, or the maximum and minimum numbers of events;
4) the conditions for retesting when actual test results are inconclusive;
5) error handling related to generating test results;
6) other criteria not mentioned here.
h) Operation process: the steps for executing the test case, defined as a series of independent steps arranged in execution order, including:
1) the test operation actions, test program inputs, and equipment operations required for each step;
2) the expected test result of each step;
3) the evaluation criteria for each step;
4) the actions or error indications accompanying program termination;
5) the process of obtaining and analyzing the actual test results.
i) Prerequisites and constraints: all prerequisites and constraints imposed in the test case description; any special restrictions, parameter deviations, or exception handling should be identified and their impact on the test case explained;
j) Test termination conditions: the conditions for normal and abnormal termination of the test.
Test case descriptions should be written based on these test case elements (see A.1).
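Elements a) to j) can be carried as one structured record. The Python field names below are illustrative mappings chosen for this sketch, not identifiers defined by the standard.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case holding elements a)-j) of 5.7.2."""
    name: str                 # a) unique name
    identifier: str           # a) unique identifier
    tracing: str = ""         # b) source the test is based on
    description: str = ""     # c) object, purpose, and method
    initialization: dict = field(default_factory=dict)  # d) environment, parameters
    inputs: list = field(default_factory=list)          # e) commands, data, signals
    expected: str = ""        # f) definite value/state, never a vague description
    criteria: str = ""        # g) how results are judged correct
    steps: list = field(default_factory=list)           # h) ordered operation process
    constraints: str = ""     # i) prerequisites and constraints
    termination: str = ""     # j) normal/abnormal stop conditions

tc = TestCase(
    name="Stand age upper bound",
    identifier="UT-001",
    tracing="detailed design, input validation section",
    inputs=[200, 201],
    expected="200 accepted; 201 rejected",
)
```

A record like this maps directly onto the template rows of Table A.1, so the same object can render both the test case description and, once results are attached, the test record.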
5.8 Test Management
5.8.1 Process Management
Testing should be performed by relatively independent personnel. Depending on the scale of the project and the test category, the test work may be organized and implemented by different organizations.
Test activities and test resources should be managed during the test.
5.8.2 Configuration Management
In accordance with the configuration management requirements of the forestry application system, the work products generated during testing should be placed under configuration management, of which:
a) for testing carried out by the forestry application system's development organization, the test work products should be included in the project's configuration management;
b) for system testing performed by an independent testing organization, a configuration management library should be established to place the tested objects and test work products under configuration management.
5.9 Test review
After testing is completed, the test process and the validity of the test results should be reviewed to determine whether the test objectives have been achieved. The review mainly covers the test records (for the format, see A.2) and the test report. The specific content and requirements include:
a) review the completeness, correctness and standardization of the contents of documents and records;
b) review the independence and effectiveness of testing activities;
c) review whether the test environment meets the test requirements;
d) review the consistency of test records, test data and test report content with the actual test process and results;
e) review the consistency of the actual testing process with the test plan and test instructions;
f) review the rationality of untested items and newly added test items;
g) review the authenticity and correctness of the test results;
h) Review the correctness of the handling of abnormalities during testing.
Appendix A
(Informative appendix)
Common templates for test documents
A.1 Description of test cases for forestry application systems
The test case document should describe the test case information in detail, and its format is shown in Table A.1.
Table A.1 Forestry application system test case description template
Use case name | Use case identifier
Test tracing
Use case description
Use case initialization: hardware configuration; software configuration; test configuration; parameter settings
Operation process: No. | input and operation instructions | expected test result | evaluation criteria | remarks
Prerequisites and constraints
Process termination conditions
Result evaluation criteria
Designer | Design date
A.2 Test records of forestry application systems
The test record shall describe what happened during the test, and its format is shown in Table A.2.
Table A.2 Forestry application system test record template
Use case name | Use case identifier
Business criticality
Use case description
Use case initialization: hardware configuration; software configuration; test configuration; parameter settings
Operation process: No. | input and operation instructions | expected test result | evaluation criteria | remarks
Whether a restart occurred | Whether the restart succeeded | Whether a failure occurred
Test conclusion
Tester | Test date
Appendix B
(Normative appendix)
Unit test instructions
B.1 Test object and purpose
B.1.1 Test object
The unit test object is the smallest testable unit of the forestry application system. In object-oriented test methods, units should include classes, subsystems, and components; in structured test methods, units should include procedures, functions, and components.
B.1.2 Test purpose
The purpose of unit testing is to check whether each unit of the forestry application system correctly implements its functions, performance, interfaces, and other design constraints, and to find errors that may exist within the unit.
B.2 Organization and management of tests
Unit testing is generally organized and implemented by the developer of the forestry application system, but may also be entrusted to a third party.
Unit test work products should be included in the project's configuration management.
B.3 Technical requirements
Unit testing includes the following technical requirements:
a) the functions, performance, and interfaces of each system unit specified in the forestry application system design documents shall be tested item by item;
b) each system characteristic shall be covered by at least one normal test case and one approved abnormal test case;
c) the inputs of the test cases shall include at least valid equivalence class values, invalid equivalence class values, and boundary data values;
d) before dynamic testing of a forestry application system unit, the unit's source code should generally be statically tested;
e) statement coverage shall reach 100%;
f) branch coverage shall reach 100%;
g) the output data and their formats shall be tested.
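Requirements b), e), and f) can be met for a small unit as follows. The unit and its tests are invented for illustration and are not taken from this standard: three cases execute every statement and both outcomes of every branch, giving 100% statement and branch coverage, and the negative-age case is the abnormal test case required by b).

```python
# Hypothetical unit under test: classify a forest stand by age.
def classify_stand(age: int) -> str:
    if age < 0:
        raise ValueError("age must be non-negative")  # abnormal path
    if age < 40:
        return "young"
    return "mature"

def test_classify_stand():
    assert classify_stand(10) == "young"    # 'age < 40' branch taken
    assert classify_stand(80) == "mature"   # 'age < 40' branch not taken
    try:
        classify_stand(-1)                  # approved abnormal test case (b)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_classify_stand()
```

In practice a coverage tool (for example, Python's coverage.py with branch measurement enabled) would be used to demonstrate that the e) and f) thresholds are actually reached rather than argued by inspection.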
B.4 Test content
......
......