Preparation for interview I: dissecting OkHttp

Posted by eyedol on Sat, 19 Feb 2022 16:45:51 +0100

OkHttp is arguably the third-party framework asked about most often in Android interviews. Since I am looking for a job anyway, this is a good time to review and dissect the library.

Part I: design patterns used

1. Builder pattern: used by OkHttpClient and Request. Why the Builder pattern? It fits when a class has many constructor parameters, most of them optional. That is exactly the case here: every parameter of these two classes has a default value (or at least is optional).

2. Static factory pattern: RequestBody.create(). Simple enough that it needs no further explanation.

3. Chain of Responsibility pattern: each handler does its own work and then passes the request on to the next handler, forming a chain. In OkHttp the final interceptors in the chain are the built-in ones supplied by the library, and we can insert our own interceptors at earlier points following the same template.

4. Producer-consumer pattern: this is the Dispatcher's job. When we create a request we are the producer; the thread-pool worker that takes the request out and executes it is the consumer; and the queue in between is the buffer.
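To make point 1 concrete, here is a toy sketch of the Builder pattern (ClientConfig is a hypothetical class, not OkHttp source): every field has a default, so callers chain only the setters they care about.

```java
// Toy example of the Builder pattern (not OkHttp source): every
// parameter has a default, so no parameter is required, just like
// OkHttpClient.Builder and Request.Builder.
class ClientConfig {
    final long readTimeoutMillis;
    final long connectTimeoutMillis;

    private ClientConfig(Builder b) {
        this.readTimeoutMillis = b.readTimeoutMillis;
        this.connectTimeoutMillis = b.connectTimeoutMillis;
    }

    static class Builder {
        // Defaults: callers only override what they need.
        private long readTimeoutMillis = 10_000;
        private long connectTimeoutMillis = 10_000;

        Builder readTimeout(long millis) {
            this.readTimeoutMillis = millis;
            return this; // returning 'this' enables chained calls
        }

        Builder connectTimeout(long millis) {
            this.connectTimeoutMillis = millis;
            return this;
        }

        ClientConfig build() {
            return new ClientConfig(this);
        }
    }
}
```

OkHttpClient.Builder and Request.Builder follow exactly this shape, just with many more fields.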

Part II: how to use

        // 1. Create the OkHttpClient
        OkHttpClient client = new OkHttpClient.Builder()
                .readTimeout(50000, TimeUnit.MILLISECONDS)
                .build();

        // 2. Create the Request
        Request request = new Request.Builder()
                .post(RequestBody.create(MediaType.parse(""), ""))
                .build();

        // 3. Send the request
        client.newCall(request).enqueue(new Callback() {
            @Override public void onFailure(Call call, IOException e) {
            }

            @Override public void onResponse(Call call, Response response) throws IOException {
            }
        });
Part III: analyze and split the source code one by one

1. Creating the OkHttpClient. It is built via the Builder pattern, so you can configure whichever parameters you need. The official documentation recommends a single client instance for the whole app (for example, held in a static field). The reason, from the official Javadoc:

 * <p>OkHttp performs best when you create a single {@code OkHttpClient} instance and reuse it for
 * all of your HTTP calls. This is because each client holds its own connection pool and thread
 * pools. Reusing connections and threads reduces latency and saves memory. Conversely, creating a
 * client for each request wastes resources on idle pools.

2. Creating the Request. Builder pattern again: set the url, headers, and so on. Nothing more to say.

3. Sending the request. This step needs some unpacking. Step into client.newCall() and have a look:

  @Override public Call newCall(Request request) {
    return new RealCall(this, request, false /* for web socket */);
  }
newCall() creates the real request object, a RealCall. So the second half of client.newCall(request).enqueue() is actually RealCall.enqueue(). Let's see:

// In RealCall
@Override public void enqueue(Callback responseCallback) {
    client.dispatcher().enqueue(new AsyncCall(responseCallback));
}
There are two points in this core line. First, the client's dispatcher makes its entrance and enqueues the task; we will cover the dispatcher later, for now it is enough to know that a thread pool sits behind it. Second, an AsyncCall is created. AsyncCall is an inner class of RealCall, and an inner class holds a reference to its outer instance, so it can use all of RealCall's fields directly. Take a look at this class:

final class AsyncCall extends NamedRunnable {
    @Override protected void execute() {
      // Not needed for now...
    }
}
For now we only need two points: AsyncCall is an inner class of RealCall, and it is a subclass of NamedRunnable, i.e. it is a task. Since it is a task, when it is handed to the thread pool above its run() method is called, and NamedRunnable.run() does nothing except delegate to execute(). So let's focus on that method:

    @Override protected void execute() {
      try {
        Response response = getResponseWithInterceptorChain(); // send the request
        if (retryAndFollowUpInterceptor.isCanceled()) {        // canceled
          responseCallback.onFailure(RealCall.this, new IOException("Canceled"));
        } else {
          responseCallback.onResponse(RealCall.this, response); // success
        }
      } catch (IOException e) {
        responseCallback.onFailure(RealCall.this, e); // failure
      } finally {
        client.dispatcher().finished(this); // end the task
      }
    }
So the flow is: send the request (this is where all the interceptors come in), report success, failure, or cancellation, and finally end the task via client.dispatcher().finished(this) (we will come back to this method when we analyze the dispatcher). That is all. Here is how the request is actually sent and the data obtained:

Response getResponseWithInterceptorChain() throws IOException {
    // Build a full stack of interceptors.
    List<Interceptor> interceptors = new ArrayList<>();
    interceptors.addAll(client.interceptors());
    interceptors.add(retryAndFollowUpInterceptor);
    interceptors.add(new BridgeInterceptor(client.cookieJar()));
    interceptors.add(new CacheInterceptor(client.internalCache()));
    interceptors.add(new ConnectInterceptor(client));
    if (!forWebSocket) {
      interceptors.addAll(client.networkInterceptors());
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));

    Interceptor.Chain chain = new RealInterceptorChain(
        interceptors, null, null, null, 0, originalRequest);
    return chain.proceed(originalRequest);
}
Two things to note. First, a whole stack of interceptors is built up: our own application interceptors go in at the very beginning, and network interceptors can be added just before the request goes out to the network. Second, the request is kicked off by RealInterceptorChain.proceed():

public Response proceed(Request request, StreamAllocation streamAllocation, HttpCodec httpCodec,
      Connection connection) throws IOException {
    // ... (checks omitted)

    // Call the next interceptor in the chain.
    RealInterceptorChain next = new RealInterceptorChain(
        interceptors, streamAllocation, httpCodec, connection, index + 1, request);
    Interceptor interceptor = interceptors.get(index);
    Response response = interceptor.intercept(next);

    return response;
}
After some checks (omitted above), the interceptors execute one by one along the chain of responsibility: each interceptor does its own work and calls proceed() on a new chain whose index points at the next interceptor. The individual interceptors are analyzed last. At this point we have reached the network and obtained the data.
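The index + 1 recursion can be modeled in a few lines (a simplified sketch with hypothetical MiniInterceptor/MiniChain names, not OkHttp's real classes): each interceptor may do work before and after calling chain.proceed(), so the request travels down the list and the response travels back up.

```java
import java.util.List;

// Simplified model of OkHttp's interceptor chain (hypothetical classes):
// proceed() hands the request to the interceptor at 'index', giving it a
// new chain positioned at index + 1 so it can pass the request onward.
interface MiniInterceptor {
    String intercept(MiniChain chain, String request);
}

class MiniChain {
    private final List<MiniInterceptor> interceptors;
    private final int index;

    MiniChain(List<MiniInterceptor> interceptors, int index) {
        this.interceptors = interceptors;
        this.index = index;
    }

    String proceed(String request) {
        // Call the next interceptor in the chain, like RealInterceptorChain.
        MiniChain next = new MiniChain(interceptors, index + 1);
        return interceptors.get(index).intercept(next, request);
    }
}
```

The last interceptor (CallServerInterceptor in OkHttp) never calls proceed(); it produces the response, which then unwinds back through every earlier interceptor.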

4. With the overall networking flow covered, let's focus on two details we skipped.

4.1) client.dispatcher(): how the Dispatcher class executes our network requests. This is where the producer-consumer pattern lives.

public final class Dispatcher {
  // Maximum number of concurrent requests: 64
  private int maxRequests = 64;
  // Maximum number of concurrent requests per host: 5
  private int maxRequestsPerHost = 5;
  private Runnable idleCallback;

  /** Executes calls. Created lazily. */
  private ExecutorService executorService;

  /** Ready async calls in the order they'll be run. */
  private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();

  /** Running asynchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();

  /** Running synchronous calls. Includes canceled calls that haven't finished yet. */
  private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();

  public Dispatcher(ExecutorService executorService) {
    this.executorService = executorService;
  }

  public Dispatcher() {
  }

  public synchronized ExecutorService executorService() {
    if (executorService == null) {
      executorService = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
          new SynchronousQueue<Runnable>(), Util.threadFactory("OkHttp Dispatcher", false));
    }
    return executorService;
  }
  // ... omitted for now

The fields of this class are important, so let's go through them:

maxRequests - the maximum number of concurrent requests: 64

maxRequestsPerHost - the maximum number of concurrent requests per host: 5

readyAsyncCalls - the queue of asynchronous calls waiting to run; no size limit

runningAsyncCalls - the queue of asynchronous calls currently running; no size limit

runningSyncCalls - the queue of synchronous calls currently running; no size limit

The thread pool has 0 core threads, an effectively unlimited maximum thread count (Integer.MAX_VALUE), and a 60-second keep-alive for idle threads.
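The practical effect of these pool parameters (0 core threads, unbounded maximum, SynchronousQueue) is that every submitted task gets a thread immediately instead of queueing inside the pool. A small standalone demonstration using the same parameters (PoolDemo is a hypothetical helper, not OkHttp code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the pool configuration OkHttp's Dispatcher uses:
// 0 core threads, unbounded max, 60s keep-alive, SynchronousQueue.
// A SynchronousQueue holds nothing, so each execute() gets a thread
// immediately rather than waiting behind other tasks in a queue.
class PoolDemo {
    static int runConcurrently(int tasks) {
        ExecutorService pool = new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        CountDownLatch allStarted = new CountDownLatch(tasks);
        AtomicInteger started = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                started.incrementAndGet();
                allStarted.countDown();
                try {
                    allStarted.await(); // hold the thread until every task has started
                } catch (InterruptedException ignored) {
                }
            });
        }
        try {
            // Only possible if all tasks run concurrently, i.e. the pool
            // spawned a thread per task instead of queueing them.
            boolean ok = allStarted.await(5, TimeUnit.SECONDS);
            return ok ? started.get() : -1;
        } catch (InterruptedException e) {
            return -1;
        } finally {
            pool.shutdownNow();
        }
    }
}
```

With a bounded pool the tasks above would deadlock waiting for each other; with the Dispatcher's configuration they all start at once, which is why the concurrency limit has to live in maxRequests rather than in the pool.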

From this configuration we can see that the thread pool itself places no limit on concurrency: a task is executed the moment it arrives. The real cap is the maxRequests field: at most 64 requests run at once, and anything beyond that is placed in the waiting queue readyAsyncCalls. These limits are only defaults and can be changed through the dispatcher's setters. See below:

// The enqueue() called from client.dispatcher().enqueue(new AsyncCall(responseCallback)):
synchronized void enqueue(AsyncCall call) {
    if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
      runningAsyncCalls.add(call);
      executorService().execute(call);
    } else {
      readyAsyncCalls.add(call);
    }
}
If fewer than 64 requests are running and fewer than 5 of them target the same host, the call is executed immediately; otherwise it is added to the waiting queue. So when does the waiting queue get drained? At the end of a call, of course: remember the client.dispatcher().finished(this) line that runs when a RealCall finishes.

  /** Used by {@code AsyncCall#run} to signal completion. */
  void finished(AsyncCall call) {
    finished(runningAsyncCalls, call, true);
  }

  // The third parameter, promoteCalls, is true here
  private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
    int runningCallsCount;
    synchronized (this) {
      if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
      if (promoteCalls) promoteCalls();
      // ...
    }
  }

  private void promoteCalls() {
    if (runningAsyncCalls.size() >= maxRequests) return; // Already running max capacity.
    if (readyAsyncCalls.isEmpty()) return; // No ready calls to promote.

    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
      AsyncCall call =;

      if (runningCallsForHost(call) < maxRequestsPerHost) {
        i.remove();
        runningAsyncCalls.add(call);
        executorService().execute(call);
      }

      if (runningAsyncCalls.size() >= maxRequests) return; // Reached max capacity.
    }
  }

The call relationship is clear from the comments: finished() delegates to its overloaded version, which first removes the finished call from the running queue and then runs promoteCalls(). That method does a few checks (is the running queue already at capacity? is anything waiting? is the per-host limit respected?), then takes the next eligible request from the head of the waiting queue, hands it to the thread pool, and adds it to the running queue. With that, the dispatcher analysis is complete.
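The dispatcher's bookkeeping can be modeled with two deques and the two limits (a simplified sketch with hypothetical names, not OkHttp's Dispatcher: "calls" are just host strings, and "executing" just means sitting in the running deque):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Simplified model of Dispatcher's enqueue/finished/promote logic
// (hypothetical sketch, not OkHttp code).
class MiniCallDispatcher {
    private final int maxRequests;
    private final int maxRequestsPerHost;
    final Deque<String> ready = new ArrayDeque<>();
    final Deque<String> running = new ArrayDeque<>();

    MiniCallDispatcher(int maxRequests, int maxRequestsPerHost) {
        this.maxRequests = maxRequests;
        this.maxRequestsPerHost = maxRequestsPerHost;
    }

    private long runningForHost(String host) {
        return;
    }

    synchronized void enqueue(String host) {
        if (running.size() < maxRequests && runningForHost(host) < maxRequestsPerHost) {
            running.add(host); // the real Dispatcher also submits to the thread pool here
        } else {
            ready.add(host);
        }
    }

    synchronized void finished(String host) {
        if (!running.remove(host)) throw new AssertionError("Call wasn't in-flight!");
        promoteCalls();
    }

    private void promoteCalls() {
        if (running.size() >= maxRequests) return; // already at capacity
        if (ready.isEmpty()) return;               // nothing to promote
        for (Iterator<String> i = ready.iterator(); i.hasNext(); ) {
            String host =;
            if (runningForHost(host) < maxRequestsPerHost) {
                i.remove();
                running.add(host);
            }
            if (running.size() >= maxRequests) return;
        }
    }
}
```

Running a few calls through it shows both limits in action: a call beyond maxRequests waits, a call beyond the per-host limit waits, and finishing a call promotes the next eligible one.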

4.2) One more place we have not analyzed carefully: after the series of interceptors we end up with the data, but where does the actual network connection come from? Let's analyze ConnectInterceptor as the example; the other interceptors have the same shape, just different responsibilities.


public final class ConnectInterceptor implements Interceptor {
  public final OkHttpClient client;

  public ConnectInterceptor(OkHttpClient client) {
    this.client = client;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();

    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();

    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}
That is all there is to it: one constructor, and one method that acquires a connection and a stream via StreamAllocation and passes them down the chain.

Topics: OkHttp Interview