Pytest is an easy-to-use, efficient and flexible unit testing framework for Python that supports both unit testing and functional testing. Rather than introducing the pytest tool itself, this article takes a real API test project as an example and applies pytest's features to actual test engineering practice, teaching you pytest along the way.
Before starting, I want to clarify two concepts that are easy to confuse: the testing framework and the testing tool. Test frameworks include unittest, pytest and TestNG, while test tools include Selenium, Appium and JMeter.
The job of a test framework is to handle basic work such as managing test cases, executing them, parameterizing, asserting and generating test reports, letting us focus on writing the test cases themselves. A good test framework should be highly extensible, support secondary development, and be able to support multiple types of automated testing.
The job of a test tool is to perform a certain type of testing: Selenium for automated WEB UI testing, Appium for automated APP testing, JMeter for automated API testing and performance testing. HTTP clients such as the OkHttp library in Java and the requests library in Python can also be regarded as API testing tools.
With these two concepts clarified, let me state the purpose of this article. There are many tutorials online, including the official documentation, that introduce Pytest feature by feature and list how each one is used. After reading them you feel you understand, but you still don't know how to combine them with a real project. This article is not an introduction to the Pytest tool itself; instead, it takes a real API test project as an example and, by combining the Pytest unit testing framework with the Python requests library, applies Pytest's features to actual test engineering practice.
Trust me, using Pytest will make your testing work much more efficient.
01 Pytest core functions
Before starting to use Pytest, let's look at its core features. According to the official website, it offers the following:
- Very easy to get started, with rich documentation full of examples to refer to.
- Supports both simple unit tests and complex functional tests.
- Supports parameterization.
- Can execute all test cases or a selected subset, and can rerun failed cases.
- Supports concurrent execution and can run test cases written for nose and unittest.
- Offers a convenient, simple way to assert.
- Can generate test results in the standard JUnit XML format.
- Has many third-party plugins and supports custom extensions.
- Integrates easily with continuous integration tools.
Pytest is installed the same way as any other Python package; you can install it directly with pip:
pip install pytest
After the installation is completed, you can verify whether the installation is successful by:
pytest --help
If the help information is printed, the installation succeeded.
Next, by developing an API automation test project, we will look in detail at how these features are used.
02 Create the test project
First create a test project directory, api_pytest, and create a virtual environment for the project. (For background on creating virtual environments, see the earlier article on that topic.) Here we go straight to the commands:
mkdir api_pytest
cd api_pytest
virtualenv --python=python3 env
In this way, the project directory and virtual environment are created.
Activate the virtual environment:
source env/bin/activate
Next, install the dependency packages. The first is pytest itself; in addition, since this article uses API automation testing as its example, install the HTTP client package requests:
pip install pytest requests
Now we create a data directory to store test data, a tests directory to store test scripts, a config directory to store configuration files, and a utils directory to store tools.
mkdir data tests config utils
Now, the directory structure of the project should be as follows:
├── config
├── data
├── env
├── tests
└── utils
At this point, the test project is created. Next, we write the test cases.
03 Write test cases
In this part, we write test cases for the Douban movie list API and the movie details API.
The two APIs are as follows:
Interface examples:

Movie list: http://api.douban.com/v2/movie/in_theaters?apikey=0df993c66c0c636e29ecbb5344252a4a&start=0&count=10
Movie details: https://api.douban.com/v2/movie/subject/30261964?apikey=0df993c66c0c636e29ecbb5344252a4a
First, we write the automated test case for the movie list API, with three verification points:
- Verify that the start in the request is consistent with the start in the response.
- Verify that the count in the request matches the count in the response.
- Verify that the title in the response is "Films on show-Shanghai".
In the tests directory, create a testintheaters.py file and write the test case in it, as follows:
import requests


class TestInTheaters(object):
    def test_in_theaters(self):
        host = "http://api.douban.com"
        path = "/v2/movie/in_theaters"
        params = {"apikey": "0df993c66c0c636e29ecbb5344252a4a",
                  "start": 0,
                  "count": 10
                  }
        headers = {
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
        }
        r = requests.request("GET", url=host + path, headers=headers, params=params)
        response = r.json()
        assert response["count"] == params["count"]
        assert response["start"] == params["start"]
        assert response["title"] == "Films on show-Shanghai", "The actual title is:{}".format(response["title"])
You might ask: is this the test case? Is this a Pytest-based test case? The answer is yes. Writing automated test cases based on Pytest is no different from writing normal Python code. The only constraint is naming: file names should start or end with test, function and method names should start with test, and class names should start with Test.
Pytest searches files matching test_*.py or *_test.py for functions starting with test outside classes, and for methods starting with test inside classes whose names start with Test, and manages these functions and methods as test cases. You can see which test cases Pytest collects with the following command:
$ py.test tests/testintheaters.py --collect-only
====================================================== test session starts =======================================================
platform linux -- Python 3.5.2, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /root/api_pytest
collected 1 item
<Module tests/testintheaters.py>
  <Class TestInTheaters>
      <Function test_in_theaters>

===================================================== no tests ran in 0.10s ======================================================
From the results we can see that one test case was collected: the test_in_theaters method in the TestInTheaters class.
Assertions in Pytest use Python's built-in assert statement, which is very simple.
04 Execute test cases
Run this test as follows:
$ pytest tests/testintheaters.py
========================================================= test session starts ==========================================================
platform linux -- Python 3.5.2, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /root/api_pytest
collected 1 item

tests/testintheaters.py F                                                                                                        [100%]

=============================================================== FAILURES ===============================================================
___________________________________________________ TestInTheaters.test_in_theaters ____________________________________________________

self = <testintheaters.TestInTheaters object at 0x7f83d108e4e0>

    def test_in_theaters(self):
        host = "http://api.douban.com"
        path = "/v2/movie/in_theaters"
        params = {"apikey": "0df993c66c0c636e29ecbb5344252a4a",
                  "start": 0,
                  "count": 10
                  }
        headers = {
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
        }
        r = requests.request("GET", url=host + path, headers=headers, params=params)
        response = r.json()
        assert response["count"] == params["count"]
        assert response["start"] == params["start"]
>       assert response["title"] == "Films on show-Shanghai", "The actual title is:{}".format(response["title"])
E       AssertionError: The actual title is:Films on show-Beijing
E       assert 'Films on show-Beijing' == 'Films on show-Shanghai'
E         - Films on show-Shanghai
E         ?                ^^
E         + Films on show-Beijing
E         ?                ^^

tests/testintheaters.py:19: AssertionError
======================================================= short test summary info ========================================================
FAILED tests/testintheaters.py::TestInTheaters::test_in_theaters - AssertionError: The actual title is:Films on show-Beijing
========================================================== 1 failed in 0.51s ===========================================================
From the output we can see that one test case was collected, that it failed (marked as F), and that detailed error information is printed in the FAILURES section, from which we can analyze the cause of the failure. This test case failed because the title assertion failed: the expected title was "Films on show-Shanghai" but the actual title was "Films on show-Beijing". The comparison between expected and actual is very intuitive.
There are many ways to execute test cases: different parameters can be appended to py.test. I have listed a few below:
$ py.test                  # run all tests below current dir
$ py.test test_module.py   # run tests in module
$ py.test somepath         # run all tests below somepath
$ py.test -k stringexpr    # only run tests with names that match the
                           # "string expression", e.g. "MyClass and not method"
                           # will select TestMyClass.test_something
                           # but not TestMyClass.test_method_simple
$ py.test test_module.py::test_func  # only run tests that match the "node ID",
                           # e.g "test_mod.py::test_func" will select
                           # only test_func in test_mod.py
The usage above is easy to understand from the comments. All of these will come in handy during test execution, so it's worth mastering them.
05 Separation of data and scripts
The test case in the previous section puts test data and test code in the same .py file, even in the same test method. This tight coupling means changes to the test data and the test code can affect each other, which hurts maintainability. For example, to add several new groups of test data for a test case, besides preparing the data you must also modify the test code, which reduces the maintainability of the test code.
In addition, interface testing is often data-driven, and data coupled to code is not convenient to parameterize with Pytest.
Separating test code from test data is a consensus in the testing field. Create a YAML file named testintheaters.yaml in the data/ directory for storing test data, as follows:
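---
tests:
- case: Verify that start and count in the response match the request parameters
  http:
    method: GET
    path: /v2/movie/in_theaters
    headers:
      User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36
    params:
      apikey: 0df993c66c0c636e29ecbb5344252a4a
      start: 0
      count: 10
  expected:
    response:
      title: Films on show-Shanghai
      count: 10
      start: 0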
If you are familiar with the YAML format, this test data file should be easy to understand. It contains a tests array, each element of which is a complete piece of test data consisting of three parts:
- case, the test case name.
- http, the request object.
- expected, the expected result.
The http request object contains all the parameters of the interface under test, including the request method, path, headers and parameters. expected holds the expected result; in the test data above, only expected values from the response are listed, but in real testing expected values from the database could be listed as well.
The test script also needs to be modified to read the request data and expected results from testintheaters.yaml, and then send the request via requests. The modified test code is as follows:
import requests
import yaml


def get_test_data(test_data_path):
    case = []      # store test case names
    http = []      # store request objects
    expected = []  # store expected results
    with open(test_data_path) as f:
        dat = yaml.load(f.read(), Loader=yaml.SafeLoader)
    test = dat['tests']
    for td in test:
        case.append(td.get('case', ''))
        http.append(td.get('http', {}))
        expected.append(td.get('expected', {}))
    parameters = zip(case, http, expected)
    return case, parameters


cases, parameters = get_test_data("../data/testintheaters.yaml")
list_params = list(parameters)


class TestInTheaters(object):
    def test_in_theaters(self):
        host = "http://api.douban.com"
        r = requests.request(list_params[0][1]["method"],
                             url=host + list_params[0][1]["path"],
                             headers=list_params[0][1]["headers"],
                             params=list_params[0][1]["params"])
        response = r.json()
        assert response["count"] == list_params[0][2]['response']["count"]
        assert response["start"] == list_params[0][2]['response']["start"]
        assert response["total"] == len(response["subjects"])
        assert response["title"] == list_params[0][2]['response']["title"], \
            "The actual title is:{}".format(response["title"])
Note that reading the YAML file requires the PyYAML package.
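If it is not installed yet, install it with pip:

pip install pyyaml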
In the test script, a function get_test_data is defined to read the test data from testintheaters.yaml: the test case names (case), request objects (http) and expected results (expected) are each read into a list, and the three lists are zipped together.
The test method itself hasn't changed much, but the test data used to send the request is no longer hard-coded; it comes from the test data file.
In general, the function that reads test data is not defined in the test case file but in the utils package, for example in utils/commonlib.py (moving the get_test_data function and its yaml import there unchanged). At this point, the directory structure of the whole project should look like this:
├── config
├── data
│   └── testintheaters.yaml
├── env
│   ├── bin
│   ├── lib
│   └── pyvenv.cfg
├── __init__.py
├── __pycache__
│   └── __init__.cpython-35.pyc
├── tests
│   ├── __init__.py
│   ├── __pycache__
│   └── testintheaters.py
└── utils
    ├── commonlib.py
    ├── __init__.py
    └── __pycache__
In this way, to modify the test script we change testintheaters.py, and to change the test data we modify testintheaters.yaml. But so far we haven't really seen the power, or even greater value, of separating test data from scripts, so let's continue.
06 Parameterization
Above we separated the test data from the test script. To add more test data for a test case, just append more data of the same format to the tests array. This process is called parameterization.
Parameterization means testing the same interface with a variety of different inputs to verify that each set of input parameters achieves the expected result. Pytest provides pytest.mark.parametrize for this. Let's first look at a pytest.mark.parametrize usage example from the official website:
# content of tests/test_time.py
import pytest
from datetime import datetime, timedelta

testdata = [
    (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
    (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]


@pytest.mark.parametrize("a,b,expected", testdata)
def test_timedistance_v0(a, b, expected):
    diff = a - b
    assert diff == expected
Executing the above script produces the following output: the test method test_timedistance_v0 was executed twice. The first execution used the first tuple in the testdata list, and the second used the second tuple. This is the effect of parameterization: the same script runs tests with different input parameters.
============================= test session starts ==============================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/chunming.liu/.local/share/virtualenvs/api_pytest-wCozfXSU/bin/python
cachedir: .pytest_cache
rootdir: /Users/chunming.liu/learn/api_pytest/tests
collecting ... collected 2 items

test_time.py::test_timedistance_v0[a0-b0-expected0] PASSED               [ 50%]
test_time.py::test_timedistance_v0[a1-b1-expected1] PASSED               [100%]

============================== 2 passed in 0.02s ===============================
Following this pattern, modify the test script in our own test project as follows:
import pytest
import requests

from api_pytest.utils.commonlib import get_test_data

cases, list_params = get_test_data("../data/testintheaters.yaml")


class TestInTheaters(object):
    @pytest.mark.parametrize("case,http,expected", list(list_params), ids=cases)
    def test_in_theaters(self, case, http, expected):
        host = "http://api.douban.com"
        r = requests.request(http["method"], url=host + http["path"],
                             headers=http["headers"], params=http["params"])
        response = r.json()
        assert response["count"] == expected['response']["count"]
        assert response["start"] == expected['response']["start"]
        assert response["title"] == expected['response']["title"], \
            "The actual title is:{}".format(response["title"])
A decorator, @pytest.mark.parametrize, is added to the test method. For each tuple in its second argument, list(list_params), the decorator unpacks the tuple and assigns the elements to the comma-separated names in its first argument, which become parameters of the test method. Inside the test method, these values can be used directly to send requests and make assertions. The decorator also has an ids parameter, whose values are printed in the test results as the test case names.
Before executing the modified test script, let's add another set of test data to the test data file. It now contains two sets:
---
tests:
- case: Verify that start and count in the response match the request parameters
  http:
    method: GET
    path: /v2/movie/in_theaters
    headers:
      User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36
    params:
      apikey: 0df993c66c0c636e29ecbb5344252a4a
      start: 0
      count: 10
  expected:
    response:
      title: Films on show-Shanghai
      count: 10
      start: 0
- case: Verify that the response title is "Films on show-Beijing"
  http:
    method: GET
    path: /v2/movie/in_theaters
    headers:
      User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36
    params:
      apikey: 0df993c66c0c636e29ecbb5344252a4a
      start: 1
      count: 5
  expected:
    response:
      title: Films on show-Beijing
      count: 5
      start: 1
Now let's run the test script to see the effect:
============================================================ test session starts =============================================================
platform linux -- Python 3.5.2, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /root/api_pytest/tests
collected 2 items

testintheaters.py F.                                                                                                                  [100%]

================================================================== FAILURES ==================================================================
_ TestInTheaters.test_in_theaters[\u9a8c\u8bc1\u54cd\u5e94\u4e2d start \u548c count \u4e0e\u8bf7\u6c42\u4e2d\u7684\u53c2\u6570\u4e00\u81f4] _

self = <api_pytest.tests.testintheaters.TestInTheaters object at 0x7fc79177b048>, case = 'Verify that start and count in the response match the request parameters'
http = {'headers': {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chr...T', 'params': {'apikey': '0df993c66c0c636e29ecbb5344252a4a', 'count': 10, 'start': 0}, 'path': '/v2/movie/in_theaters'}
expected = {'response': {'count': 10, 'start': 0, 'title': 'Films on show-Shanghai'}}

    @pytest.mark.parametrize("case,http,expected", list(list_params), ids=cases)
    def test_in_theaters(self, case, http, expected):
        host = "http://api.douban.com"
        r = requests.request(http["method"], url=host + http["path"], headers=http["headers"], params=http["params"])
        response = r.json()
        assert response["count"] == expected['response']["count"]
        assert response["start"] == expected['response']["start"]
>       assert response["title"] == expected['response']["title"], "The actual title is:{}".format(response["title"])
E       AssertionError: The actual title is:Films on show-Beijing
E       assert 'Films on show-Beijing' == 'Films on show-Shanghai'
E         - Films on show-Shanghai
E         ?                ^^
E         + Films on show-Beijing
E         ?                ^^

testintheaters.py:20: AssertionError
========================================================== short test summary info ===========================================================
FAILED testintheaters.py::TestInTheaters::test_in_theaters[\u9a8c\u8bc1\u54cd\u5e94\u4e2d start \u548c count \u4e0e\u8bf7\u6c42\u4e2d\u7684\u53c2\u6570\u4e00\u81f4] - AssertionError: The actual title is:Films on show-Beijing
======================================================== 1 failed, 1 passed in 0.63s =========================================================
From the results, Pytest collected 2 items and the test script was executed twice: the first execution used the first set of test data and failed (F); the second used the second set and passed (.). In the short test summary info section you can see some Unicode escape sequences; these are actually the ids. Because they are Chinese, they are shown as escapes by default. To display the Chinese characters, create a Pytest configuration file, pytest.ini, in the root directory of the test project and add the following:
[pytest]
disable_test_id_escaping_and_forfeit_all_rights_to_community_support = True
Execute the test script again, and the short test summary info section will display the Chinese content correctly:
FAILED tests/testintheaters.py::TestInTheaters::test_in_theaters[Verify that start and count in the response match the request parameters] - AssertionError: ...
According to this parameterization method, if you want to modify or add test data, you only need to modify the test data file.
Now, the directory structure of the automated test project should be as follows:
├── config
├── data
│   └── testintheaters.yaml
├── env
│   ├── bin
│   ├── lib
│   └── pyvenv.cfg
├── __init__.py
├── __pycache__
│   └── __init__.cpython-35.pyc
├── tests
│   ├── __init__.py
│   ├── __pycache__
│   ├── pytest.ini
│   └── testintheaters.py
└── utils
    ├── commonlib.py
    ├── __init__.py
    └── __pycache__
07 Test configuration management
In the automated test code, host is hard-coded in the test script, which is clearly inappropriate. The host will be used by different test scripts, so it should be maintained in one common place; if it needs to change, only one place is modified. In my experience, the config folder is the right place for it.
Besides host, other configuration related to the test environment can also go into the config folder: database information, Kafka connection information, and basic environment-specific test data such as test accounts. Often we have several test environments, such as dev, test, stg and prod; we can create subdirectories under config to distinguish them. The config folder should therefore have a structure like this:
├── config
│   ├── prod
│   │   └── config.yaml
│   └── test
│       └── config.yaml
config.yaml stores the configuration information for each environment. Continuing the earlier example, it should look like this:
host:
  douban: http://api.douban.com
With the test configuration split out of the script, there needs to be a mechanism to read it back in before the test script can use it. Pytest provides the fixture mechanism, which lets us perform operations before test execution; here we use a fixture to read the configuration in advance. Let's adapt the example in the official documentation to introduce fixtures. Look at the following code:
import pytest


@pytest.fixture
def smtp_connection():
    import smtplib
    connection = smtplib.SMTP_SSL("smtp.163.com", 465, timeout=5)
    yield connection
    print("teardown smtp")
    connection.close()


def test_ehlo(smtp_connection):
    response, msg = smtp_connection.ehlo()
    assert response == 250
    assert 0
In this code, smtp_connection is decorated with @pytest.fixture, indicating that it is a fixture function. It connects to the 163 mail server and yields a connection object. After the last test using it, test_ehlo, has executed, the code after the yield runs: print("teardown smtp") and connection.close() disconnect the SMTP connection.
The fixture function's name can be used as a parameter of the test method test_ehlo; inside the test method, that name refers to the object yielded by the fixture function.
Back to our need to read test configuration: create a conftest.py file in the tests/ directory of the automated test project and define a fixture function env:
@pytest.fixture(scope="session") def env(request): config_path = os.path.join(request.config.rootdir, "config", "test", "config.yaml") with open(config_path) as f: env_config = yaml.load(f.read(), Loader=yaml.SafeLoader) return env_config
conftest.py is a plugin file in which you can implement the hook functions Pytest provides, as well as custom fixture functions. These functions only take effect in the directory containing conftest.py and its subdirectories. scope="session" means this fixture function is session-scoped: it is executed only once, before the entire test session begins. Besides session-scoped fixtures, there are also function-scoped, class-scoped and other fixtures.
The env function takes a parameter, request, which is itself a fixture. Here we use its request.config.rootdir property, which is the directory containing the pytest.ini configuration file. Our pytest.ini is at the project root, so the full value of config_path is:
/root/api_pytest/config/test/config.yaml
Pass env as a parameter into the test method, and change host in the test method to env["host"]["douban"]:
import pytest
import requests

from api_pytest.utils.commonlib import get_test_data

cases, list_params = get_test_data("../data/testintheaters.yaml")


class TestInTheaters(object):
    @pytest.mark.parametrize("case,http,expected", list(list_params), ids=cases)
    def test_in_theaters(self, case, http, expected, env):
        host = env["host"]["douban"]
        r = requests.request(http["method"], url=host + http["path"],
                             headers=http["headers"], params=http["params"])
        response = r.json()
        assert response["count"] == expected['response']["count"]
        assert response["start"] == expected['response']["start"]
        assert response["title"] == expected['response']["title"], \
            "The actual title is:{}".format(response["title"])
In this way, the test configuration file and the test script are decoupled: if the host needs to change, only the configuration file is modified; the test script stays untouched. The way tests are executed does not change either.
The env function above has a slight defect: the configuration file it reads is fixed; it always reads the test environment's configuration. We would like to choose which environment's configuration to read via a command-line option when executing tests, so we can run against different environments. Pytest provides a hook function called pytest_addoption that can receive command-line options. It is written like this:
def pytest_addoption(parser):
    parser.addoption("--env", action="store", dest="environment", default="test",
                     help="environment: test or prod")
pytest_addoption here receives the value of the --env command-line option and stores it in the environment variable (dest). If the option is not given, environment defaults to test. Put the code above in conftest.py, and in the env function replace "test" in os.path.join with request.config.getoption("environment"), so that the configuration file read can be controlled from the command line. For example, to run against the test environment, specify --env test:
py.test --env test tests/testintheaters.py
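For clarity, here is what conftest.py looks like with both pieces combined (a sketch of the changes described above):

import os

import pytest
import yaml


def pytest_addoption(parser):
    parser.addoption("--env", action="store", dest="environment", default="test",
                     help="environment: test or prod")


@pytest.fixture(scope="session")
def env(request):
    # "test" is replaced by the value of the --env option ("test" by default)
    environment = request.config.getoption("environment")
    config_path = os.path.join(request.config.rootdir, "config", environment, "config.yaml")
    with open(config_path) as f:
        env_config = yaml.load(f.read(), Loader=yaml.SafeLoader)
    return env_config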
If you don't want to specify --env on the command line every time, you can also put it in pytest.ini:
[pytest]
addopts = --env prod
Options given on the command line override those in pytest.ini.
08 Test preparation and cleanup
Often we need to prepare database connections and test data before test cases execute, and disconnect from the database and clean up dirty test data afterwards. From section 07 you already have a sense of how to prepare for tests using the env fixture function; this section introduces more.
The scope of a @pytest.fixture function can be function, class, module, package or session. Their meanings are:
- function: the fixture function is executed once before and once after each test method.
- class: the fixture function is executed once before and once after each test class.
- module: the fixture function is executed once before and once after each test script.
- package: the fixture function is executed once before the first test case and once after the last test case in the test package (folder).
- session: the fixture function is executed once at the start of the whole test session and once after it ends.
Generally, connecting to and disconnecting from the database, reading test configuration files and similar work belong in session-scoped fixture functions, because these operations only need to happen once for the whole test session; a sketch of such a fixture follows this paragraph. Preparing test data is usually function- or class-scoped, because test data often differs between test methods or test classes.
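As an illustration only, here is a minimal sketch of a session-scoped database fixture, assuming a local SQLite database file named test.db (not part of this project):

import sqlite3

import pytest


@pytest.fixture(scope="session")
def db_connection():
    # Connect once before the whole test session starts
    connection = sqlite3.connect("test.db")
    yield connection
    # Disconnect once after the last test case has finished
    connection.close()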
In the TestInTheaters test class, let's simulate a fixture function, preparation, that prepares and cleans up test data, with scope set to function:
@pytest.fixture(scope="function") def preparation(self): print("Preparing test data in the database") test_data = "Preparing test data in the database" yield test_data print("Clean up test data")
In the test method, take preparation as a parameter (a sketch of the change is shown below), then execute the test with the following command:
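A minimal sketch of the adjusted test method signature (the body stays the same; env comes from section 07):

@pytest.mark.parametrize("case,http,expected", list(list_params), ids=cases)
def test_in_theaters(self, case, http, expected, env, preparation):
    ...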
$ py.test -s -q --tb=no tests/testintheaters.py
====================================================== test session starts =======================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /Users/chunming.liu/learn/api_pytest, inifile: pytest.ini
collected 2 items

tests/testintheaters.py Preparing test data in the database
F Clean up test data
Preparing test data in the database
. Clean up test data

==================================================== short test summary info =====================================================
FAILED tests/testintheaters.py::TestInTheaters::test_in_theaters[Verify that start and count in the response match the request parameters] - AssertionError: ...
================================================== 1 failed, 1 passed in 0.81s ===================================================
From the output we can see that before and after each test case, "Preparing test data in the database" and "Clean up test data" are each executed once. If the scope is changed to class, the output of the test run becomes:
tests/testintheaters.py Preparing test data in the database
F. Clean up test data

That is, "Preparing test data in the database" and "Clean up test data" are each executed once, before and after the whole test class.
09 Marking and grouping
Test cases can be marked with pytest.mark. Common scenarios: actively skipping test cases for features that have not yet been implemented; skipping a test case under certain conditions; and actively marking a test case as expected to fail. For these three scenarios, pytest provides built-in marks. Let's look at the code:
import sys

import pytest


class TestMarks(object):
    @pytest.mark.skip(reason="not implemented")
    def test_the_unknown(self):
        """Skipped because the tested logic has not been implemented yet."""
        assert 0

    @pytest.mark.skipif(sys.version_info < (3, 7), reason="requires python3.7 or higher")
    def test_skipif(self):
        """Not executed on Python versions lower than 3.7."""
        assert 1

    @pytest.mark.xfail
    def test_xfail(self):
        """Expected to fail.

        When this case fails, the result is marked as xfailed (expected failure)
        and no error message is printed. When it unexpectedly passes, the result
        is marked as xpassed (unexpected pass).
        """
        assert 0

    @pytest.mark.xfail(run=False)
    def test_xfail_not_run(self):
        """run=False means this case is not executed at all."""
        assert 0
Run this test as follows:
$ py.test -s -q --tb=no tests/test_marks.py
====================================================== test session starts =======================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /Users/chunming.liu/learn/api_pytest, inifile: pytest.ini
collected 4 items

tests/test_marks.py s.xx

============================================ 1 passed, 1 skipped, 2 xfailed in 0.06s =============================================
From the results we can see that the first test case was skipped (s), the second passed (.), and the third and fourth were xfailed (xx).
Besides the built-in marks, you can also define custom marks and add them to test methods:
@pytest.mark.slow
def test_slow(self):
    """A custom mark."""
    assert 0
Then you can select or deselect tests with -m, for example to run only the test cases marked slow, or everything except them:
$ py.test -s -q --tb=no -m "slow" tests/test_marks.py
$ py.test -s -q --tb=no -m "not slow" tests/test_marks.py
For custom marks, to avoid PytestUnknownMarkWarning, it is better to register them in pytest.ini:
[pytest]
markers =
slow: marks tests as slow (deselect with '-m "not slow"')
10 Concurrent execution
If there are thousands of automated test cases, executing them concurrently is a good idea: it speeds up the execution of the whole test suite.
Pytest has a plugin, pytest-xdist, that enables concurrent execution. After installing it, you can specify the degree of concurrency with the -n parameter when executing tests; the auto value automatically uses the number of CPUs as the degree of concurrency. First install the plugin:
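pip install pytest-xdist

Now execute all of this article's test cases concurrently: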
$ py.test -s -q --tb=no -n auto tests/
====================================================== test session starts =======================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /Users/chunming.liu/learn/api_pytest, inifile: pytest.ini
plugins: xdist-1.31.0, forked-1.1.3
gw0 [10] / gw1 [10] / gw2 [10] / gw3 [10] / gw4 [10] / gw5 [10] / gw6 [10] / gw7 [10]
s.FxxF..F.
==================================================== short test summary info =====================================================
FAILED tests/test_marks.py::TestMarks::test_slow - assert 0
FAILED tests/test_smtpsimple.py::test_ehlo - assert 0
FAILED tests/testintheaters.py::TestInTheaters::test_in_theaters[Verify that start and count in the response match the request parameters] - AssertionError: ...
...
======================================= 3 failed, 4 passed, 1 skipped, 2 xfailed in 1.91s ========================================
You can see intuitively that concurrent execution is much faster than sequential execution. Note, however, that test data must not interfere across test cases; it is best to use different test data for different test cases.
Speaking of plugins, the pytest ecosystem has many useful third-party plugins. You can browse and search for more at https://pypi.org/search/?q=pytest, and of course you can also develop your own.
11 Test reports
Pytest can easily generate test reports. By specifying the --junitxml parameter, you can generate a test report in XML format. JUnit XML is a very common standard test report format that can be consumed by many tools, such as continuous integration tools:
$ py.test -s -q --junitxml=./report.xml tests/
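The generated report.xml contains one testcase element per executed test. Simplified, its shape is roughly like this (an illustrative excerpt; the exact attributes vary by pytest version):

<testsuite errors="0" failures="1" name="pytest" skipped="0" tests="2" time="0.63">
  <testcase classname="tests.testintheaters.TestInTheaters" name="test_in_theaters[...]" time="0.41">
    <failure message="AssertionError: ...">...</failure>
  </testcase>
  <testcase classname="tests.testintheaters.TestInTheaters" name="test_in_theaters[...]" time="0.12"/>
</testsuite>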
12 Summary
Starting from an actual project, this article introduced how to write test cases, parameterize them, manage test configuration, prepare and clean up tests, run tests concurrently and generate reports. Following this article, you can gradually build a complete test project.
So far, the complete directory structure of our automated test project is as follows:
├── config
│   ├── prod
│   │   └── config.yaml
│   └── test
│       └── config.yaml
├── data
│   └── testintheaters.yaml
├── env
│   ├── bin
│   ├── lib
│   └── pyvenv.cfg
├── __init__.py
├── tests
│   ├── conftest.py
│   ├── __init__.py
│   ├── pytest.ini
│   ├── testintheaters.py
│   └── test_marks.py
└── utils
    ├── commonlib.py
    └── __init__.py