Node.js has never been well suited to CPU-intensive, time-consuming operations, and this has always been a pain point. To address it, Node.js offers three options:
1 Child processes
2 Worker threads
3 The Libuv thread pool
The first two are easier to use because we only need to write JavaScript, but they also have some drawbacks:
1. The overhead of executing JavaScript.
2. The Libuv thread pool can only be used indirectly, limited by what Node.js exposes.
3. They cannot take advantage of solutions available at the C/C++ layer (built-in or third-party).
In these cases we can try the third option: using the Libuv thread pool directly through N-API. N-API provides the following APIs for this purpose.
napi_create_async_work  // Create a work, but do not execute it yet
napi_delete_async_work  // Release the memory of the work created above
napi_queue_async_work   // Submit a work to Libuv
napi_cancel_async_work  // Cancel a task in Libuv; if it is already executing, it cannot be cancelled
Next, let's look at how to use the Libuv thread pool through N-API, starting with the JS layer.
const { submitWork } = require('./build/Release/test.node');
submitWork((sum) => {
  console.log(sum);
});
The JS layer submits a task and passes in a callback. Now let's look at the N-API code.
napi_value Init(napi_env env, napi_value exports) {
  napi_value func;
  napi_create_function(env, NULL, NAPI_AUTO_LENGTH, submitWork, NULL, &func);
  napi_set_named_property(env, exports, "submitWork", func);
  return exports;
}
NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
First we define the exported function; now let's look at the core logic.
1 Define a struct to save the context
struct info {
  int sum;                // Save the calculation result
  napi_ref func;          // Save the callback
  napi_async_work worker; // Save the work object
};
2 Submit the task to Libuv
static napi_value submitWork(napi_env env, napi_callback_info info) {
  napi_value resource_name;
  napi_status status;
  size_t argc = 1;
  napi_value args[1];
  // Allocate the context on the heap so it is still valid when the work and
  // done callbacks run after this function has returned (freed in done)
  struct info *ptr = new struct info{0, nullptr, nullptr};
  status = napi_get_cb_info(env, info, &argc, args, NULL, NULL);
  if (status != napi_ok) {
    goto done;
  }
  napi_create_reference(env, args[0], 1, &ptr->func);
  status = napi_create_string_utf8(env, "test", NAPI_AUTO_LENGTH, &resource_name);
  if (status != napi_ok) {
    goto done;
  }
  // Create a work; the context saved in ptr will be used in the work and done functions
  status = napi_create_async_work(env, nullptr, resource_name, work, done, (void *) ptr, &ptr->worker);
  if (status != napi_ok) {
    goto done;
  }
  // Submit the work to Libuv
  status = napi_queue_async_work(env, ptr->worker);
done:
  napi_value ret;
  napi_create_int32(env, status == napi_ok ? 0 : -1, &ret);
  return ret;
}
Execute the above function, and the task will be submitted to the Libuv thread pool.
3 The Libuv worker thread executes the task
void work(napi_env env, void* data) {
  struct info *arg = (struct info *)data;
  printf("doing...\n");
  int sum = 0;
  for (int i = 0; i < 10; i++) {
    sum += i;
  }
  arg->sum = sum;
}
Very simple: it sums a few numbers and saves the result in the context. Note that this function runs on a Libuv worker thread, so it must not make N-API calls that execute JavaScript or touch JavaScript objects.
4 Call back into JS
void done(napi_env env, napi_status status, void* data) {
  struct info *arg = (struct info *)data;
  if (status == napi_cancelled) {
    printf("cancel...\n");
  } else if (status == napi_ok) {
    printf("done...\n");
    napi_value callback;
    napi_value global;
    napi_value result;
    napi_value sum;
    // Get the result
    napi_create_int32(env, arg->sum, &sum);
    napi_get_reference_value(env, arg->func, &callback);
    napi_get_global(env, &global);
    // Call back into js
    napi_call_function(env, global, callback, 1, &sum, &result);
    // Clean up
    napi_delete_reference(env, arg->func);
    napi_delete_async_work(env, arg->worker);
  }
  // Free the heap-allocated context created in submitWork
  delete arg;
}
After execution we see 45 printed. Next, let's analyze the overall flow inside Node.js, starting with ThreadPoolWork.
ThreadPoolWork is Node.js's encapsulation of a Libuv work request (uv_work_t).
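Before looking at the class itself, here is a minimal standalone sketch of the raw Libuv API it wraps (ScheduleWork calls uv_queue_work and CancelWork calls uv_cancel, as we will see below); the payload struct and callback names are only illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <uv.h>

// Illustrative payload carried through the work request via req->data
typedef struct {
  int sum;
} payload_t;

// Runs on a Libuv thread-pool thread (like DoThreadPoolWork)
void on_work(uv_work_t* req) {
  payload_t* p = (payload_t*)req->data;
  for (int i = 0; i < 10; i++) p->sum += i;
}

// Runs back on the event-loop thread (like AfterThreadPoolWork);
// status is UV_ECANCELED if uv_cancel succeeded before the task started
void on_after_work(uv_work_t* req, int status) {
  payload_t* p = (payload_t*)req->data;
  printf("status=%d sum=%d\n", status, p->sum);
  free(p);
}

int main() {
  uv_loop_t* loop = uv_default_loop();
  uv_work_t req;
  req.data = calloc(1, sizeof(payload_t));
  uv_queue_work(loop, &req, on_work, on_after_work);
  // uv_cancel((uv_req_t*)&req); // would only succeed if the task has not started yet
  return uv_run(loop, UV_RUN_DEFAULT);
}

ThreadPoolWork does essentially this, plus some bookkeeping such as the Environment's waiting-request counters, as the code below shows.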
class ThreadPoolWork {
 public:
  explicit inline ThreadPoolWork(Environment* env) : env_(env) {
    CHECK_NOT_NULL(env);
  }
  inline virtual ~ThreadPoolWork() = default;

  inline void ScheduleWork();
  inline int CancelWork();

  virtual void DoThreadPoolWork() = 0;
  virtual void AfterThreadPoolWork(int status) = 0;

  Environment* env() const { return env_; }

 private:
  Environment* env_;
  uv_work_t work_req_;
};
The class definition is very simple: it mainly wraps a uv_work_t. DoThreadPoolWork and AfterThreadPoolWork are pure virtual functions implemented by subclasses, which we will analyze later. Let's look at ScheduleWork first.
void ThreadPoolWork::ScheduleWork() {
  env_->IncreaseWaitingRequestCounter();
  int status = uv_queue_work(
      env_->event_loop(),
      &work_req_,
      // Task function executed on a Libuv worker thread
      [](uv_work_t* req) {
        ThreadPoolWork* self = ContainerOf(&ThreadPoolWork::work_req_, req);
        self->DoThreadPoolWork();
      },
      // Callback executed after the task has been processed
      [](uv_work_t* req, int status) {
        ThreadPoolWork* self = ContainerOf(&ThreadPoolWork::work_req_, req);
        self->env_->DecreaseWaitingRequestCounter();
        self->AfterThreadPoolWork(status);
      });
  CHECK_EQ(status, 0);
}
ScheduleWork is responsible for submitting the task to Libuv. Next, look at CancelWork.
int ThreadPoolWork::CancelWork() {
  return uv_cancel(reinterpret_cast<uv_req_t*>(&work_req_));
}
It directly calls Libuv's uv_cancel to cancel the task. Having covered the parent class, let's look at the subclass, which is implemented in N-API.
class Work : public node::AsyncResource, public node::ThreadPoolWork {
 private:
  explicit Work(node_napi_env env,
                v8::Local<v8::Object> async_resource,
                v8::Local<v8::String> async_resource_name,
                napi_async_execute_callback execute,
                napi_async_complete_callback complete = nullptr,
                void* data = nullptr)
      : AsyncResource(env->isolate,
                      async_resource,
                      *v8::String::Utf8Value(env->isolate, async_resource_name)),
        ThreadPoolWork(env->node_env()),
        _env(env),
        _data(data),
        _execute(execute),
        _complete(complete) {}

  ~Work() override = default;

 public:
  static Work* New(node_napi_env env,
                   v8::Local<v8::Object> async_resource,
                   v8::Local<v8::String> async_resource_name,
                   napi_async_execute_callback execute,
                   napi_async_complete_callback complete,
                   void* data) {
    return new Work(env, async_resource, async_resource_name, execute, complete, data);
  }

  // Free the memory of a Work object
  static void Delete(Work* work) {
    delete work;
  }

  // Execute the function set by the user
  void DoThreadPoolWork() override {
    _execute(_env, _data);
  }

  void AfterThreadPoolWork(int status) override {
    // Execute the callback set by the user
    _complete(_env, ConvertUVErrorCode(status), _data);
  }

 private:
  node_napi_env _env;
  // Data set by the user, used to save execution results, etc.
  void* _data;
  // Function that performs the task
  napi_async_execute_callback _execute;
  // Callback invoked after the task has been processed
  napi_async_complete_callback _complete;
};
In the Work class we see the implementations of the virtual functions DoThreadPoolWork and AfterThreadPoolWork; there is not much logic. Finally, let's look at the implementation of the APIs provided by N-API.
1 napi_create_async_work
napi_status napi_create_async_work(napi_env env,
                                   napi_value async_resource,
                                   napi_value async_resource_name,
                                   napi_async_execute_callback execute,
                                   napi_async_complete_callback complete,
                                   void* data,
                                   napi_async_work* result) {
  v8::Local<v8::Context> context = env->context();

  v8::Local<v8::Object> resource;
  if (async_resource != nullptr) {
    CHECK_TO_OBJECT(env, context, resource, async_resource);
  } else {
    resource = v8::Object::New(env->isolate);
  }

  v8::Local<v8::String> resource_name;
  CHECK_TO_STRING(env, context, resource_name, async_resource_name);

  uvimpl::Work* work = uvimpl::Work::New(reinterpret_cast<node_napi_env>(env),
                                         resource,
                                         resource_name,
                                         execute,
                                         complete,
                                         data);

  *result = reinterpret_cast<napi_async_work>(work);

  return napi_clear_last_error(env);
}
napi_create_async_work is essentially a thin wrapper: it creates a Work object and returns it to the caller.
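As an aside, the submitWork example above passes nullptr for async_resource, so napi_create_async_work takes the else branch and creates an empty object. A hedged fragment like the following (assumed to sit inside submitWork, right before its napi_create_async_work call; the resource variable is illustrative) would exercise the first branch by supplying an explicit resource object for async_hooks:

// Illustrative only: provide an explicit async_resource instead of nullptr
napi_value resource;
status = napi_create_object(env, &resource);
if (status != napi_ok) {
  goto done;
}
status = napi_create_async_work(env, resource, resource_name,
                                work, done, (void *) ptr, &ptr->worker);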
2 napi_delete_async_work
napi_status napi_delete_async_work(napi_env env, napi_async_work work) {
  CHECK_ENV(env);
  CHECK_ARG(env, work);

  uvimpl::Work::Delete(reinterpret_cast<uvimpl::Work*>(work));

  return napi_clear_last_error(env);
}
napi_delete_async_work is used to release the memory corresponding to work after the task is executed.
3 napi_queue_async_work
napi_status napi_queue_async_work(napi_env env, napi_async_work work) {
  CHECK_ENV(env);
  CHECK_ARG(env, work);

  napi_status status;
  uv_loop_t* event_loop = nullptr;
  status = napi_get_uv_event_loop(env, &event_loop);
  if (status != napi_ok)
    return napi_set_last_error(env, status);

  uvimpl::Work* w = reinterpret_cast<uvimpl::Work*>(work);

  w->ScheduleWork();

  return napi_clear_last_error(env);
}
napi_queue_async_work wraps ScheduleWork and is used to submit the task to the Libuv thread pool.
4 napi_cancel_async_work
napi_status napi_cancel_async_work(napi_env env, napi_async_work work) {
  CHECK_ENV(env);
  CHECK_ARG(env, work);

  uvimpl::Work* w = reinterpret_cast<uvimpl::Work*>(work);

  CALL_UV(env, w->CancelWork());

  return napi_clear_last_error(env);
}
napi_cancel_async_work wraps CancelWork, i.e. it cancels a task queued in the Libuv thread pool. As we can see, these functions are thin layers without much logic; they mainly exist to conform to the N-API specification.
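To make the cancellation semantics concrete, here is a hypothetical sketch of an addon function that tries to cancel a previously queued work item. cancelWork, the g_worker global, and its registration in Init are not part of the example above; they only illustrate how napi_cancel_async_work might be used. As noted earlier, the call only succeeds while the task is still waiting in the queue; once a worker thread has picked it up, cancellation fails and done will eventually run with napi_ok rather than napi_cancelled.

// Hypothetical: assume submitWork stored its work handle here after creating it
static napi_async_work g_worker = nullptr;

static napi_value cancelWork(napi_env env, napi_callback_info info) {
  napi_value ret;
  napi_status status = napi_generic_failure;
  if (g_worker != nullptr) {
    // Fails if a Libuv thread has already started executing the task
    status = napi_cancel_async_work(env, g_worker);
  }
  napi_create_int32(env, status == napi_ok ? 0 : -1, &ret);
  return ret;
}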
Summary: the APIs provided by N-API mean we are no longer limited to the asynchronous interfaces that Node.js itself provides (which use the Libuv thread pool under the hood); we can use the Libuv thread pool directly, so we can not only write our own C/C++ but also reuse existing industry solutions to handle time-consuming tasks in Node.js.
Repository: https://github.com/theanarkh/learn-to-write-nodejs-addons