Interface automation testing requires framework knowledge. How much have you learned?

Posted by dawsba on Tue, 28 Dec 2021 00:04:51 +0100


Why has TestNG become the first choice among Java testing frameworks? Still hesitating? Read on! We have already analyzed the reasons for choosing TestNG from multiple perspectives and walked through its runtime life cycle. In this article, we will look at the @Test annotation and its parameters in detail.

1. Basic use of the @Test annotation

We have already written several cases, adding the @Test annotation to each case's test method to mark it as a test method; a method carrying @Test is the simplest possible TestNG test. Let's write a basic one:

@Test
public void test() {
    System.out.println("test");
}

After running it, we see the output we expect -> test

2. @Test annotation parameters

Let's open the Test annotation source to see how it is defined:

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE, ElementType.CONSTRUCTOR})
public @interface Test {
  String[] groups() default {};
  boolean enabled() default true;

  /** @deprecated */
  @Deprecated
  String[] parameters() default {};

  String[] dependsOnGroups() default {};
  String[] dependsOnMethods() default {};
  long timeOut() default 0L;
  long invocationTimeOut() default 0L;
  int invocationCount() default 1;
  int threadPoolSize() default 0;
  int successPercentage() default 100;
  String dataProvider() default "";
  Class<?> dataProviderClass() default Object.class;
  boolean alwaysRun() default false;
  String description() default "";
  Class[] expectedExceptions() default {};
  String expectedExceptionsMessageRegExp() default ".*";
  String suiteName() default "";
  String testName() default "";

  /** @deprecated */
  boolean sequential() default false;

  boolean singleThreaded() default false;
  Class retryAnalyzer() default Class.class;
  boolean skipFailedInvocations() default false;
  boolean ignoreMissingDependencies() default false;
  int priority() default 0;
}

The @Target({ElementType.METHOD, ElementType.TYPE, ElementType.CONSTRUCTOR}) declaration shows where the annotation may be applied: on ordinary methods, on classes, and on constructors. The annotation also defines a large number of parameters. Let's go through what each of them does:

groups

groups assigns the test method to one or more named groups: methods that belong to the same feature, or that make up one continuous workflow, can share a group, and a run can then be restricted to exactly that group.
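A minimal sketch (method names and group names are illustrative): both cases belong to the "login" group, one also to "smoke", and a run can then select either group through the groups/run/include section of testng.xml:

@Test(groups = {"login"})
public void loginWithPassword() {
    // ...
}

@Test(groups = {"login", "smoke"})
public void loginWithToken() {
    // ...
}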

enabled

enabled indicates whether the current test method is active. The default is true, meaning the method takes part in the run; setting it to false skips the method.
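A minimal sketch (the method name is illustrative): a case that is switched off without being deleted:

@Test(enabled = false)
public void notReadyYet() {
    // skipped entirely while enabled = false
}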

parameters

parameters declares parameter values to be passed into the test method. As the source above shows, it is deprecated; prefer the @Parameters annotation or a data provider instead.

dependsOnGroups

dependsOnGroups lists the groups this method depends on. If certain methods must execute before the current method runs, put them into a group and name that group here; at run time the dependent group runs first, and the current test method runs after it.

dependsOnMethods

dependsOnMethods lists the methods this method depends on. If the current method needs another method to have executed, or to have produced a result, before it can run, name that method here; TestNG orders the run according to the dependency chain. A sketch of both dependency styles follows.
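A minimal sketch (all names are illustrative): loginAfterCreate runs only after createAccount has passed, and cleanUp runs only after every test in the "setup" group has passed:

@Test(groups = {"setup"})
public void createAccount() {
    // ...
}

@Test(dependsOnMethods = {"createAccount"})
public void loginAfterCreate() {
    // ...
}

@Test(dependsOnGroups = {"setup"})
public void cleanUp() {
    // ...
}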

timeOut

timeOut sets the maximum running time of the test method, in milliseconds; the test fails if it does not finish within the given time, which makes it easy to check that a method completes execution within its deadline.
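A minimal sketch: the test fails if its body takes longer than two seconds (the sleep stands in for the real operation under test):

@Test(timeOut = 2000)
public void respondsWithinTwoSeconds() throws InterruptedException {
    Thread.sleep(500); // must finish within the 2000 ms budget
}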

invocationTimeOut

invocationTimeOut, like the previous parameter, sets a time limit in milliseconds, but it applies to the cumulative running time of all invocations of the method. It is therefore only meaningful together with invocationCount, and is ignored when invocationCount is not set.

invocationCount

invocationCount specifies how many times the current test method is invoked. The default is 1, meaning the method is called exactly once per run.

threadPoolSize

threadPoolSize specifies how many threads are started to run the current test method, which is handy for simulating performance and concurrency tests. The default is 0, meaning the main thread is used and no separate threads are opened; it only takes effect together with invocationCount.

successPercentage

successPercentage sets the percentage of invocations that must succeed. Some runs may fail because of network or performance fluctuations during testing; this parameter lets you declare the test passed as long as the given percentage of invocations succeeds.

dataProvider

dataProvider names the data provider method (one annotated with @DataProvider) that supplies arguments to this test.

dataProviderClass

dataProviderClass specifies the class in which the data provider method is defined. By default the provider is looked up on the current test class; a provider in another class must be a static method. A sketch of the pair follows.
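A minimal sketch (class and method names are illustrative): the provider returns one Object[] per test run, and the test method's parameters receive the values row by row:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginTest {

    @DataProvider(name = "credentials")
    public Object[][] credentials() {
        return new Object[][] {
            {"alice", "secret1"},
            {"bob",   "secret2"},
        };
    }

    @Test(dataProvider = "credentials")
    public void login(String user, String password) {
        System.out.println(user + " / " + password); // runs once per row
    }
}

To move the provider into a helper class, make credentials() static there and add dataProviderClass = TheHelper.class (a hypothetical class name) to the @Test annotation.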

alwaysRun

alwaysRun controls whether the current method runs under any circumstances. When true, the method still runs even if the methods or groups it depends on have failed. The default is false.

description

description holds a human-readable description of the current test method, which appears in reports.

expectedExceptions

expectedExceptions lists the exception types the current test method is expected to throw. If the method throws one of the listed exceptions, the test counts as successful; throwing nothing, or a different exception, fails it.

expectedExceptionsMessageRegExp

expectedExceptionsMessageRegExp sets a regular expression that the message of the expected exception must match; it is checked after the exception type itself has matched.

suiteName

suiteName specifies the name of the suite the current test method should run under.

testName

testName specifies the test name under which the current test method is reported when it runs.

sequential

sequential, when true, forces all test methods of the current class to execute in their defined order. As the source above shows, it is deprecated; use singleThreaded instead.

singleThreaded

If singleThreaded is set to true, all methods of this test class are guaranteed to run in the same thread, even when the run uses parallel="methods". This attribute can only be used at the class level; at the method level it is ignored. Note: this attribute was formerly called sequential (now deprecated).

retryAnalyzer

retryAnalyzer enables a retry mechanism: it names a class implementing IRetryAnalyzer, and when the current test method fails, TestNG asks that analyzer whether to run it again, so a flaky test can be retried a bounded number of times.
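A minimal sketch (the class name is illustrative): an analyzer that grants up to three retries, attached to a test via the retryAnalyzer parameter:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryThreeTimes implements IRetryAnalyzer {
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        return attempts++ < 3; // returning true asks TestNG to run the test again
    }
}

@Test(retryAnalyzer = RetryThreeTimes.class)
public void flakyNetworkCall() {
    // ...
}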

skipFailedInvocations

skipFailedInvocations controls whether, when invocationCount is greater than 1 and one invocation fails, the remaining invocations are skipped. The default is false.

ignoreMissingDependencies

ignoreMissingDependencies refers to whether to continue execution when the specified dependency cannot be found. The default value is false

priority

The priority parameter sets the scheduling priority of the current test method. Methods with lower values run first; the default is 0.
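A minimal sketch (method names are illustrative): regardless of declaration order, the method with the lower value is scheduled first:

@Test(priority = 2)
public void runsSecond() {
    // ...
}

@Test(priority = 1)
public void runsFirst() {
    // ...
}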

3. Common parameter examples

Next, let's look at how the most common annotation parameters are used, through a few examples.

How to report exceptions in a test

In early development, the traditional way to report errors was the return code: returning -1, say, meant failure. This approach has problems: the caller needs a pile of if branches just to tell success from failure, and a suitable error code does not always exist for the error at hand, so codes and errors drift apart. Exceptions were introduced to carry specific error information and fixed those drawbacks. But how do you handle an expected exception gracefully in a Java test? Suppose the requirement is booking a flight, and booking a seat on a full plane must throw an exception. The JUnit 3-era pattern looks like this:

@Test
public void shouldThrowIfPlaneIsFull() {
    Plane plane = createPlane();
    plane.bookAllSeats();
    try {
        plane.bookPlane(createValidItinerary(), null);
        fail("The reservation should have failed");
    } catch (ReservationException ex) {
        // success, do nothing: the test will pass
    }
}

A try/catch like this is the most common approach. But what if throwing the exception is exactly the success condition of the test case? Do we have to write try/catch every time? TestNG has a more elegant way:

@Test(expectedExceptions = ReservationException.class)
public void shouldThrowIfPlaneIsFull() {
    Plane plane = createPlane();
    plane.bookAllSeats();
    plane.bookPlane(createValidItinerary(), null);
}

Setting the expectedExceptions parameter on the @Test annotation marks the exception we expect the test to throw; if that exception occurs during the run, the test case counts as a success, which is far more elegant than before. But what if every failure comes back as a RuntimeException, and you want to decide from the message whether the scenario you meant to trigger really occurred? That is what expectedExceptionsMessageRegExp is for: set it to the exact expected message or a regular expression for it. Once the exception type has matched, the message is matched against the pattern, and only an exception whose message also matches counts as a success.
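A minimal sketch of the message match (the message text is illustrative): the test passes only because the thrown RuntimeException's message matches the pattern:

@Test(expectedExceptions = RuntimeException.class,
      expectedExceptionsMessageRegExp = ".*plane is full.*")
public void shouldFailWithFullPlaneMessage() {
    throw new RuntimeException("booking failed: plane is full");
}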

Multithreaded and concurrent test runs

Early programs were mostly single-threaded and relied on machine hardware to deliver a good user experience. Once multi-core processors became mainstream, nearly every program went multi-threaded, and a Java program that performs well in single-threaded tests can reveal unknown problems once many users hit it at the same time. How do we simulate such multithreaded scenarios in a test case? Don't worry: TestNG has built-in concurrency support that helps show whether code is thread safe in a given scenario. Let's start with a classic singleton:

public class Singleton {
    private static Singleton instance = null;

    public static Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}

This is the classic lazy singleton. At first glance it guarantees that Singleton is instantiated only once, but is that really so? Let's put it under a concurrent, multithreaded test:

// each invocation records the instance it saw; a correct singleton leaves exactly one element
private final Set<Singleton> seen = ConcurrentHashMap.newKeySet();

@Test(invocationCount = 100, threadPoolSize = 10)
public void testSingleton() {
    Thread.yield(); // encourage the threads to interleave
    seen.add(Singleton.getInstance());
    Assert.assertEquals(seen.size(), 1); // fails once two different instances have been seen
}

We set the invocationCount parameter on the @Test annotation, meaning the method is run 100 times, and the threadPoolSize parameter, meaning ten threads run it concurrently (no matter how many threads are used, the total number of runs stays 100). Let's look at the results:

=======================================
Concurrent testing
Total tests run: 100, Failures: 5, Skips: 0
=======================================

Five of our assertions failed: this singleton really is unsafe under multithreading.

Stability and reliability testing

Testing frequently raises requirements like this: an interface's call time is unstable, and we need to measure its stability or reliability. That is where the timeOut and successPercentage parameters come in. Say an interface must be called 100 times, every call must return within 10 s, and the overall success rate must exceed 98%, otherwise the interface is not acceptable. The test can be written like this:

// Each of the 100 calls must finish within 10 s, and at least 98% of them must succeed
@Test(timeOut = 10000, invocationCount = 100, successPercentage = 98)
public void waitForAnswer() throws InterruptedException {
    // answerReady() is a hypothetical stand-in for the interface under test
    while (!answerReady()) {
        Thread.sleep(1000);
    }
}
