The concept is the same as the "data-driven test steps" approach in the "UI automation test framework" chapter: the parameters of an interface (such as method, url, params, etc.) are encapsulated in a YAML file for management. When a test step changes, you only need to modify the configuration in the YAML file.
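As a rough illustration only (the field names below are an assumed layout, not a schema required by any particular framework), such a YAML description of a test step might look like this:

```yaml
# Illustrative step description; the keys method/url/params are assumed here,
# not mandated by the framework, and the URL path is hypothetical.
department_list:
  method: GET
  url: https://docker.testing-studio.com/department/list
  params:
    id: 2
```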
**Data-driven test data**
Data-driven testing means that changes in the data drive the execution of automated tests and ultimately change the test results. In short, it is the application of parameterization. For test cases with a small amount of data, parameterization in code is enough to achieve data-driven testing. When the amount of data is large, it is recommended to store the data in a structured file (such as YAML, JSON, etc.) and read it from the file inside the test cases.
Parameterized data-driven testing
The principle is similar to the "data-driven test data" approach in the previous "UI automation test framework" chapter: the @pytest.mark.parametrize decorator is still used to parameterize the test, and parameterization is what realizes the data-driven behavior.
Through parameterization, verify that the parentid of the departments with id 2 and 3 is 1:
```python
import pytest


class TestDepartment:
    department = Department()

    @pytest.mark.parametrize("id", [2, 3])
    def test_department_list(self, id):
        r = self.department.list(id)
        assert self.department.jsonpath(expr="$..parentid")[0] == 1
```
The above code uses the @pytest.mark.parametrize decorator to pass in two sets of data. The test results show that two test cases are executed rather than one; that is, pytest automatically generates one test case per set of test data, executes each of them, and produces two test results.
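When the class is run verbosely, the two generated cases appear as separate node IDs (pytest appends the parameter value to the test name, e.g. test_department_list[2] and test_department_list[3]). A minimal way to see this, assuming the test lives in a hypothetical file named test_department.py:

```python
import pytest

# -v lists every generated case; with int parameters the node IDs look like
# test_department_list[2] and test_department_list[3].
pytest.main(["-v", "test_department.py"])
```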
Data-driven testing using a YAML file
When the amount of test data is large, consider storing the data in a structured file, reading it from the file in the required format, and passing it to the test cases for execution. This hands-on example uses YAML. YAML is structured with dynamic fields and is data-centric, which makes it better suited to data-driven testing than Excel, CSV, JSON, XML, and similar formats.
Store the two sets of parameterized data above in a YAML file. Create a data/department_list.yml file with the following content:
```yaml
- 2
- 3
```
The above code defines a YAML data file named department_list.yml. The file defines a list containing two values; after loading, the data becomes [2, 3]. Next, change the parameterized data in the test case so that it is read from department_list.yml. The code is as follows:
```python
import pytest
import yaml


class TestDepartment:
    department = Department()

    @pytest.mark.parametrize("id",
                             yaml.safe_load(open("../data/department_list.yml")))
    def test_department_list(self, id):
        r = self.department.list(id)
        assert self.department.jsonpath(expr="$..parentid")[0] == 1
```
The above code simply uses the yaml.safe_load() method to read the data from department_list.yml and pass each value into the test_department_list() method, completing the verification of inputs and results.
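One caveat worth noting: the path "../data/department_list.yml" passed to open() is resolved against the directory pytest is launched from, so the test can break if the run directory changes. A common hardening, sketched here under the assumption that the data directory sits one level above the test file, is to build the path relative to the test file itself:

```python
import os

import yaml

# Resolve data/department_list.yml relative to this test file rather than the
# current working directory, so it still loads when pytest runs from elsewhere.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
DATA_FILE = os.path.join(BASE_DIR, "..", "data", "department_list.yml")

with open(DATA_FILE, encoding="utf-8") as f:
    department_ids = yaml.safe_load(f)  # expected to yield [2, 3]
```

The list loaded this way can then be passed to @pytest.mark.parametrize exactly as before.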
**Data-driven configuration**
In practical work, environment switching and configuration are usually not hard-coded, so that they are easier to maintain. The chapter "Interface testing in multiple environments" already introduced how to make environment switching a configurable option. This section refactors that part and completes the multi-environment configuration in a data-driven way.
Environment preparation
Following the chapter "Interface testing in multiple environments", change the environment configuration in this chapter to a data-driven mode. The code is as follows:
```python
# Change the host to an IP and attach a Host header
env = {
    "docker.testing-studio.com": {
        "dev": "127.0.0.1",
        "test": "1.1.1.2"
    },
    "default": "dev"
}
data["url"] = str(data["url"]).replace(
    "docker.testing-studio.com",
    env["docker.testing-studio.com"][env["default"]])
data["headers"]["Host"] = "docker.testing-studio.com"
```
**Hands-on demonstration**
Still taking YAML as an example, put all the environment configuration information into an env.yml file. If you are worried about making mistakes, you can use yaml.safe_dump(env) to convert the dict into YAML.
As shown below, the printed output is the configuration information converted to YAML format:
```python
def test_send(self):
    env = {
        "docker.testing-studio.com": {
            "dev": "127.0.0.1",
            "test": "1.1.1.2"
        },
        "default": "dev"
    }
    yaml2 = yaml.safe_dump(env)
    print("")
    print(yaml2)
```
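One small note: yaml.safe_dump() sorts keys alphabetically by default, so the printed order may not match the dict's insertion order. If you want to keep the original order (supported in PyYAML 5.1 and later), pass sort_keys=False; a minimal sketch:

```python
import yaml

env = {
    "docker.testing-studio.com": {"dev": "127.0.0.1", "test": "1.1.1.2"},
    "default": "dev"
}

# sort_keys=False keeps the dict's insertion order in the dumped YAML
print(yaml.safe_dump(env, sort_keys=False))
```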
Paste the printed content into the env.yml file:
```yaml
docker.testing-studio.com:
  dev: "127.0.0.1"
  test: "1.1.1.2"
default: "dev"
```
Slightly modify the code in the environment preparation: change the env variable from a hard-coded dict to the result of yaml.safe_load() reading env.yml:
```python
# Change the host to an IP and attach a Host header
env = yaml.safe_load(open("./env.yml"))
data["url"] = str(data["url"]).replace(
    "docker.testing-studio.com",
    env["docker.testing-studio.com"][env["default"]])
data["headers"]["Host"] = "docker.testing-studio.com"
```
In this way, configuration information can be changed simply by editing the env.yml file, which realizes data-driven configuration.
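For context, this host-rewriting snippet usually lives in a shared base class that all interface objects inherit from. The sketch below assumes a hypothetical BaseApi.send() helper built on requests; the class and method names are illustrative, not the framework's required API:

```python
import requests
import yaml


class BaseApi:
    # Hypothetical base class: rewrites the host according to env.yml
    # before sending the request.
    env = yaml.safe_load(open("./env.yml"))

    def send(self, data: dict):
        # Replace the domain with the IP of the currently selected environment
        # and attach the original domain as the Host header.
        data["url"] = str(data["url"]).replace(
            "docker.testing-studio.com",
            self.env["docker.testing-studio.com"][self.env["default"]])
        data.setdefault("headers", {})["Host"] = "docker.testing-studio.com"
        return requests.request(**data)
```

An interface object such as Department could then call self.send(data) with the step data loaded from YAML.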