Summary
Project address: https://github.com/snower/slock
What is a state and atomic-operation database? Unlike redis, which mainly stores data and can efficiently synchronize it between nodes and systems, slock is designed to store only synchronization state and carries almost no data. Its high-performance asynchronous binary protocol also ensures that a waiting client is triggered efficiently the moment the awaited state is reached. Unlike redis's passively checked expiration times, slock's wait timeouts and lock expirations are triggered accurately and actively. Multi-core support and a simpler system structure also give it much higher throughput and lower latency than redis, which better matches the higher-performance, lower-latency requirements of state synchronization.
Why are flash sales ("second kill") hard? The problem is that a huge number of invalid requests must be rejected in a very short time while only a few valid requests are actually processed; simplified further, it is a problem of ultra-high-concurrency state synchronization among a large number of requests. slock's high QPS quickly filters out the mass of invalid requests, and its high-performance atomic operations cleanly implement the inventory-grabbing logic.
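The inventory-grab step can be pictured with a small single-process sketch. It uses `java.util.concurrent.Semaphore` as a local stand-in for the cluster-wide atomic lock-count operation slock performs; all names here are illustrative, not part of the slock API:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SeckillSketch {
    // Run `requests` concurrent buyers against `stockUnits` units of stock.
    // The semaphore permits play the role of slock's atomic lock count:
    // exactly stockUnits requests can succeed, the rest fail fast.
    static int[] runSale(int stockUnits, int requests) {
        Semaphore stock = new Semaphore(stockUnits);
        AtomicInteger sold = new AtomicInteger();
        AtomicInteger rejected = new AtomicInteger();
        Thread[] buyers = new Thread[requests];
        for (int i = 0; i < requests; i++) {
            buyers[i] = new Thread(() -> {
                // tryAcquire is the atomic "grab one unit" step; failing fast
                // is how the flood of invalid requests gets filtered out.
                if (stock.tryAcquire()) {
                    sold.incrementAndGet();
                } else {
                    rejected.incrementAndGet();
                }
            });
            buyers[i].start();
        }
        for (Thread t : buyers) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return new int[]{sold.get(), rejected.get()};
    }

    public static void main(String[] args) {
        int[] r = runSale(5, 100);
        System.out.println(r[0] + " sold, " + r[1] + " rejected"); // 5 sold, 95 rejected
    }
}
```

With slock, the same semantics extend across processes and hosts: the per-key maximum lock count is the stock, and a failed acquire is an invalid request answered immediately.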
With the rise of nodejs, asynchronous-IO frameworks have become increasingly mature and convenient. In multi-threaded synchronous-IO designs we often have to convert a scenario into queue processing and then push the results back. Asynchronous IO needs none of that complexity: simply take a distributed lock and wait for availability, and the whole flow returns to the logic of single-machine multi-threaded programming, which is simpler to understand and maintain. For example, when a single request requires heavy processing, under high concurrency it may have to be sent to a queue and the result pushed back later; with an asynchronous-IO distributed lock, the waiters on the lock effectively form one large distributed queue, greatly simplifying the implementation.
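The "lock waiters form a queue" idea above can be sketched in a single process with `java.util.concurrent` (not the slock client): a fair lock's wait list plays the queue's role, so no explicit queue worker or result-push path is needed. The names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockAsQueue {
    // Run `tasks` concurrent jobs that each perform a small critical-section
    // update. Contending jobs simply wait their turn on the lock -- the lock's
    // FIFO wait list *is* the queue, with no separate worker or result push.
    static int runJobs(int tasks) {
        ReentrantLock lock = new ReentrantLock(true); // fair: FIFO waiters
        List<Integer> results = new ArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < tasks; i++) {
            final int id = i;
            pool.submit(() -> {
                lock.lock();
                try {
                    results.add(id); // critical section, serialized by the lock
                } finally {
                    lock.unlock();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return results.size(); // every job ran exactly once, none were lost
    }

    public static void main(String[] args) {
        System.out.println(runJobs(100) + " jobs serialized"); // 100 jobs serialized
    }
}
```

A slock distributed lock generalizes this picture: the waiters may live in different processes on different hosts, yet the programming model stays plain lock/unlock.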
Features
- Ultra-high performance: more than 2 million QPS on an Intel i5-4590
- Simple, stable, and reliable high-performance asynchronous binary protocol; the redis synchronous text protocol can also be used
- Multi-core, multi-threaded support
- Multi-level AOF persistence:
  - Return directly without persistence
  - Return after persisting once the expiration percentage is exceeded
  - Return after AOF-time persistence
  - Return immediately with asynchronous persistence
  - Return after all active nodes in the cluster have persisted successfully
- High-availability cluster mode with automatic migration and automatic proxying
- Timeouts and expirations accurate to milliseconds, seconds, or minutes; timeout and expiration events can be subscribed to separately
- Support for multi-count locks and reentrant locks
- Last-word commands (commands executed automatically when the connection is closed)
Scenario examples
Distributed lock
The whole protocol has only two instructions: Lock and Unlock. Distributed locking is the most common scenario. Beyond better performance and lower latency, slock actively and accurately triggers wait-timeout and lock-expiration events, so a true wait mechanism is available; distributed locks implemented on redis generally require the client to poll with delayed retries.
```java
package main;

import io.github.snower.jaslock.Lock;
import io.github.snower.jaslock.ReplsetClient;
import io.github.snower.jaslock.exceptions.SlockException;

import java.nio.charset.StandardCharsets;

public class App {
    public static void main(String[] args) {
        // Connect to the slock cluster; the client follows master changes automatically.
        ReplsetClient replsetClient = new ReplsetClient(new String[]{"172.27.214.150:5658"});
        try {
            replsetClient.open();
            // Key "test", wait timeout 5s, lock expiration 5s
            Lock lock = replsetClient.newLock("test".getBytes(StandardCharsets.UTF_8), 5, 5);
            lock.acquire();
            lock.release();
        } catch (SlockException e) {
            e.printStackTrace();
        } finally {
            replsetClient.close();
        }
    }
}
```
Nginx & OpenResty rate limiting
OpenResty can use this service for rate limiting and easily works across nodes. Because the high-performance asynchronous binary protocol is used, each worker needs only one connection to the server, so high-concurrency dispatch will not exhaust internal connections. When the server's master node changes, workers automatically switch to the new available master, providing high availability.
Maximum-concurrency rate limiting
Each key can be given a maximum lock count; with this, a maximum-concurrency limiter is easy to build.
```nginx
lua_package_path "lib/resty/?.lua;";

init_worker_by_lua_block {
    local slock = require("slock")
    slock:connect("lock1", "127.0.0.1", 5658)
}

server {
    listen 80;

    location /flow/maxconcurrent {
        access_by_lua_block {
            local slock = require("slock")
            local client = slock:get("lock1")
            local flow_key = "flow:maxconcurrent"
            local args = ngx.req.get_uri_args()
            for key, val in pairs(args) do
                if key == "flow_key" then
                    flow_key = val
                end
            end
            -- at most 10 concurrent holders, wait timeout 5s, lock expiration 60s
            local lock = client:newMaxConcurrentFlow(flow_key, 10, 5, 60)
            local ok, err = lock:acquire()
            if not ok then
                ngx.say("acquire error:" .. err)
                ngx.exit(ngx.HTTP_OK)
            else
                ngx.ctx.lock1 = lock
            end
        }
        echo "hello world";
        log_by_lua_block {
            local lock = ngx.ctx.lock1
            if lock ~= nil then
                local ok, err = lock:release()
                if not ok then
                    ngx.log(ngx.ERR, "slock release error:" .. err)
                end
            end
        }
    }
}
```
Token-bucket rate limiting
Each key can be given a maximum lock count, with all locks expiring when the token period ends, which implements token-bucket rate limiting. Using millisecond expiration times, it can also perform peak shaving to smooth traffic.
```nginx
lua_package_path "lib/resty/?.lua;";

init_worker_by_lua_block {
    local slock = require("slock")
    slock:connect("lock1", "127.0.0.1", 5658)
}

server {
    listen 80;

    location /flow/tokenbucket {
        access_by_lua_block {
            local slock = require("slock")
            local client = slock:get("lock1")
            local flow_key = "flow:tokenbucket"
            local args = ngx.req.get_uri_args()
            for key, val in pairs(args) do
                if key == "flow_key" then
                    flow_key = val
                end
            end
            -- 10 tokens per period, wait timeout 5s, token period 60s
            local lock = client:newTokenBucketFlow(flow_key, 10, 5, 60)
            local ok, err = lock:acquire()
            if not ok then
                ngx.say("acquire error:" .. err)
                ngx.exit(ngx.HTTP_OK)
            end
        }
        echo "hello world";
    }
}
```
Other available scenarios
- Distributed Event: a common scenario is QR-code login, where the page displaying the QR code waits for the scan status.
- Distributed Semaphore: a more general form of rate limiting, also usable for notifying asynchronous task results.
- Distributed read-write lock.
- Flash-sale ("second kill") scenario: a typical case with a huge number of requests but very few valid ones. Atomic operations support the inventory-grabbing logic well, and ultra-high concurrency support absorbs the flood of invalid requests.
- Asynchronous result notification: a result a web page needs is produced by a background scheduled task; the web layer can invoke the asynchronous task over the network and then wait for completion through a distributed Event.
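The Event pattern in the QR-code login item above can be sketched in a single process using `java.util.concurrent.CountDownLatch` as a local stand-in for slock's distributed Event (one side waits, the other sets); the method and names are illustrative, not the slock client API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class EventSketch {
    // The QR-code page blocks on the event; the phone-side scan sets it.
    // A distributed Event offers the same wait/set pair across hosts.
    static boolean waitForScan(long timeoutSeconds) {
        CountDownLatch scanned = new CountDownLatch(1);

        Thread phone = new Thread(() -> {
            // ... user scans the code, backend marks login complete ...
            scanned.countDown(); // "set" the event
        });
        phone.start();

        try {
            // "wait" with a timeout, as a slock client would
            boolean ok = scanned.await(timeoutSeconds, TimeUnit.SECONDS);
            phone.join();
            return ok;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(waitForScan(5) ? "login confirmed" : "wait timed out");
    }
}
```

Replacing the latch with a distributed Event moves the waiter and the setter into different processes while keeping the same two-step wait/set shape.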
In the scenarios above, the external interface can be implemented in OpenResty with the trigger performed by internal systems; OpenResty's high performance and high concurrency easily cover many needs that previously required queues plus long-connection push.