AccessibilityService in detail: simulating gestures with GestureDescription on Android 7.0 to grab red envelopes, a core code analysis

Posted by Garrett on Wed, 29 May 2019 19:09:07 +0200

At present, most plug-ins for grabbing WeChat red envelopes no longer work. The key reason is that on Android 7.0 we can no longer obtain the target View through findAccessibilityNodeInfosByText or findAccessibilityNodeInfosByViewId, because the system UI is now implemented in a new way. That is what this article covers: although we can't get the View, we can still open the red envelope by dispatching a simulated gesture that clicks it.
Let's analyze this step by step, starting with the dispatchGesture method.

boolean dispatchGesture (GestureDescription gesture, 
                AccessibilityService.GestureResultCallback callback, 
                Handler handler)

This is the method we call to simulate gestures.
It takes three parameters:

  1. gesture (GestureDescription): a description of the gesture. To simulate a gesture, you must first describe the gesture you want to perform.
  2. callback (AccessibilityService.GestureResultCallback): invoked with the result after the gesture simulation has been executed.
  3. handler: in most cases we don't need it and can simply pass null.
    The parameter to pay attention to is GestureDescription, which is covered in detail below.
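Putting the three parameters together, a minimal call inside an AccessibilityService subclass might look like the following sketch. The coordinates and log tag are placeholders, the usual android.graphics / android.accessibilityservice / android.util imports are omitted, and it requires API 24+ plus android:canPerformGestures="true" in the service's XML declaration:

```kotlin
// Inside a subclass of AccessibilityService (API 24+).
fun tapAt(x: Float, y: Float) {
    val path = Path().apply { moveTo(x, y) }   // zero-length path = a touch that doesn't move
    val gesture = GestureDescription.Builder()
        .addStroke(GestureDescription.StrokeDescription(path, 0, 50))
        .build()
    dispatchGesture(gesture, object : GestureResultCallback() {
        override fun onCompleted(gestureDescription: GestureDescription) {
            Log.d("GestureDemo", "gesture completed")  // simulation finished normally
        }
        override fun onCancelled(gestureDescription: GestureDescription) {
            Log.d("GestureDemo", "gesture cancelled")  // e.g. interrupted by a real touch
        }
    }, null)                                           // handler: null is fine in most cases
}
```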

The official description of GestureDescription:

Accessibility services with the AccessibilityService_canPerformGestures property can dispatch gestures. This class describes those gestures. Gestures are made up of one or more strokes. Gestures are immutable once built. 
Spatial dimensions throughout are in screen pixels. Time is measured in milliseconds.


The key to constructing a GestureDescription is the StrokeDescription:

GestureDescription.StrokeDescription(Path path, long startTime, long duration)

For example:

val builder = GestureDescription.Builder()
val gestureDescription = builder
                    .addStroke(GestureDescription.StrokeDescription(path, 100, 50))
                    .build()

The parameters are described as follows:

  • path: the stroke's path
  • startTime: the time, in milliseconds, from the start of the gesture to the start of this stroke; must be non-negative
  • duration: the time, in milliseconds, the stroke takes to traverse the path; must be non-negative
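Because startTime is measured from the start of the whole gesture, several strokes can be sequenced inside one GestureDescription. A hypothetical sketch (the coordinates are placeholders):

```kotlin
val tapFirst  = Path().apply { moveTo(300f, 600f) }
val tapSecond = Path().apply { moveTo(300f, 900f) }
val gesture = GestureDescription.Builder()
    .addStroke(GestureDescription.StrokeDescription(tapFirst, 0, 50))    // runs from 0 ms to 50 ms
    .addStroke(GestureDescription.StrokeDescription(tapSecond, 200, 50)) // starts 200 ms into the gesture
    .build()
```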

Here we focus on path; the official documentation says:

path: The path to follow. Must have exactly one contour. The bounds of the path must not be negative. The path must not be empty. If the path has zero length (for example, a single moveTo()), the stroke is a touch that doesn't move.
This value must never be null.

That's very clear. To simulate a single-click (tap) event, just do this:

val path = Path()
path.moveTo(x, y)

Note: x and y are the coordinates of the button to be clicked, relative to the top-left corner of the screen.
To simulate a scrolling (swipe) gesture, you can do this:
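If you don't have exact coordinates, one option (an assumption on my part, not from the original) is to derive them from the screen size inside the service, since the service is a Context:

```kotlin
val metrics = resources.displayMetrics   // screen dimensions in pixels
val x = metrics.widthPixels / 2f         // horizontal centre of the screen
val y = metrics.heightPixels * 0.4f      // 40% down from the top (placeholder choice)
val path = Path().apply { moveTo(x, y) }
```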

val path = Path()
path.moveTo(x1, y1)
path.lineTo(x2, y2)

This simulates a swipe from the first point to the second; other, more complex gestures can be built with Path in the same way.
That concludes this introduction to simulated gestures. If you have any questions, please leave a comment.
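Combined with dispatchGesture, the swipe above can be wrapped in a small helper. A sketch, with the 300 ms duration as an arbitrary choice:

```kotlin
// Inside a subclass of AccessibilityService (API 24+).
fun swipe(x1: Float, y1: Float, x2: Float, y2: Float) {
    val path = Path().apply {
        moveTo(x1, y1)   // the stroke starts here
        lineTo(x2, y2)   // and moves here over the stroke's duration
    }
    val gesture = GestureDescription.Builder()
        .addStroke(GestureDescription.StrokeDescription(path, 0, 300))
        .build()
    dispatchGesture(gesture, null, null)  // callback and handler are both optional
}
```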
