Tags: [advanced] [Redis] [ZooKeeper]
1. Question
What is the difference between Redis distributed locks and ZooKeeper (zk) distributed locks?
2. Analysis
This question demands a lot from the candidate: you need to understand not only how each lock is implemented, but also the principles underneath, so the answer has many layers.
Redis is known for being lightweight, and intuitively a distributed lock seems easier to build there, for example with setnx. But once high availability enters the picture, the difficulty of a Redis lock explodes.
Add in the other properties of locks (optimistic, pessimistic, read-write, and so on) and things get even more complicated.
If you tried to cover everything, you could talk all day.
3. Answer
Let's start with a simple, entry-level answer:
- A Redis distributed lock can be implemented with the setnx command (in practice, the SET command with the NX option is recommended instead)
- A zk distributed lock is based on the ordering of ephemeral sequential nodes and the watch mechanism
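The setnx-style primitive behind the first bullet can be sketched in plain Java. This is a minimal in-process illustration only, not a real Redis client: the map stands in for the Redis keyspace, `putIfAbsent` plays the role of SET NX, and the token check on release mirrors the check-then-DEL Lua script that prevents one client from releasing another client's lock. All names here are illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;

public class SetNxSketch {
    private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // Acquire: succeeds only if the key is absent, like SET key value NX.
    public boolean tryAcquire(String key, String token) {
        return store.putIfAbsent(key, token) == null;
    }

    // Release: delete only if the stored token matches, mirroring the
    // compare-then-DEL script used to avoid releasing someone else's lock.
    public boolean release(String key, String token) {
        return store.remove(key, token);
    }
}
```

With this sketch, a second client's acquire fails while the first holds the key, and a release with the wrong token is rejected, which is exactly the behavior the real Redis commands provide.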
Answering this way digs a hole for yourself, because it drags in a lot of detail. The interviewer only asked about the difference, so why box yourself in at the implementation level?
Suggested answer:
- For Redis: the RedLock implementation encapsulated by Redisson
- For zk: the InterProcessMutex encapsulated by Curator
The comparison:
- Ease of implementation: zookeeper >= redis
- Server-side performance: redis > zookeeper
- Client-side performance: zookeeper > redis
- Reliability: zookeeper > redis
Now in more detail:
3.1 Implementation difficulty
Working directly against the underlying APIs, the difficulty is about the same on either side: many edge cases have to be handled. But because zk's ZNodes naturally carry lock-like semantics, getting started with zk directly is very simple.
Redis has to account for many more failure scenarios, such as lock timeout and the high availability of the lock itself, which makes it harder to implement correctly.
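The lock-timeout hazard mentioned above can be made concrete with a small sketch. Suppose the TTL expires while the first holder is still working: a second client can legitimately take the lock, and the first holder's release must then fail rather than delete the new owner's entry. This is an in-process simulation with illustrative names, not a real Redis client; in Redis the expiry and the compare-and-delete both happen server-side and atomically, which this sketch does not attempt to reproduce.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ExpiringLockSketch {
    private static final class Entry {
        final String token;
        final long expiresAt;
        Entry(String token, long expiresAt) { this.token = token; this.expiresAt = expiresAt; }
    }

    private final ConcurrentHashMap<String, Entry> store = new ConcurrentHashMap<>();

    // Take the lock if it is absent or its TTL has lapsed.
    // (Not atomic here; Redis performs this check-and-set server-side.)
    public boolean tryAcquire(String key, String token, long ttlMillis) {
        long now = System.currentTimeMillis();
        Entry cur = store.get(key);
        if (cur != null && cur.expiresAt > now) {
            return false; // still held by a live owner
        }
        store.put(key, new Entry(token, now + ttlMillis));
        return true;
    }

    // Release only if we still own the lock; after an expiry-and-takeover,
    // the stale holder's token no longer matches and the call fails safely.
    public boolean release(String key, String token) {
        Entry cur = store.get(key);
        if (cur == null || !cur.token.equals(token)) {
            return false;
        }
        store.remove(key);
        return true;
    }
}
```

The point of the sketch is the failure mode itself: a correct Redis lock must pair every expiry with an ownership check on release, and that is one of the edge cases that makes a hand-rolled implementation hard.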
3.2 Server performance
Zk is based on the Zab protocol: a write succeeds only after a majority of nodes have acked it, so throughput is low. If locks are acquired and released frequently, the server cluster comes under heavy pressure.
Redis is memory-based, and a write only needs to succeed on the master. Throughput is high and the pressure on the Redis server is low.
3.3 Client performance
Zk has a notification mechanism: while waiting for a lock, a client can register a watcher, which avoids polling and keeps the performance cost small.
Redis has no such notification mechanism, so a client can only compete for the lock by polling, CAS-style. The extra idle spinning puts pressure on the client.
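The two waiting strategies can be contrasted with a small sketch. The flag below stands in for "the lock is free": the Redis-style client probes repeatedly with a backoff (each probe would be a network round trip in reality), while the zk-style client simply parks until it is woken, the way a watcher callback would wake it. This is purely illustrative; no real server or client library is involved.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class WaitStrategySketch {
    // Redis-style: poll with a backoff until the flag flips or we time out.
    static boolean pollUntilFree(AtomicBoolean free, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (free.get()) {
                return true;   // in real life, each probe is a round trip to Redis
            }
            Thread.sleep(10);  // backoff between probes; still wasted work
        }
        return false;
    }

    // ZooKeeper-style: block until the "watcher" fires; zero idle probes.
    static boolean awaitNotify(CountDownLatch watcher, long timeoutMillis) throws InterruptedException {
        return watcher.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

Both calls end up acquiring the lock, but the polling client burns CPU and network on every probe in between, which is exactly the client-side cost the comparison above refers to.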
3.4 Reliability
This one is clear-cut. ZooKeeper was born for coordination: the strict Zab protocol controls data consistency, and its lock model is robust.
Redis pursues throughput and is slightly weaker on reliability. Even RedLock cannot guarantee 100% robustness, but ordinary applications rarely hit the extreme scenarios, so it is still widely used.
4. Expansion
Sample code for a zk distributed lock, using Curator:
```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;

import java.util.concurrent.TimeUnit;

public class ExampleClientThatLocks {
    private final InterProcessMutex lock;
    private final FakeLimitedResource resource;
    private final String clientName;

    public ExampleClientThatLocks(CuratorFramework client, String lockPath,
                                  FakeLimitedResource resource, String clientName) {
        this.resource = resource;
        this.clientName = clientName;
        lock = new InterProcessMutex(client, lockPath);
    }

    public void doWork(long time, TimeUnit unit) throws Exception {
        if (!lock.acquire(time, unit)) {
            throw new IllegalStateException(clientName + " could not acquire the lock");
        }
        try {
            System.out.println(clientName + " has the lock");
            resource.use();
        } finally {
            System.out.println(clientName + " releasing the lock");
            lock.release(); // always release the lock in a finally block
        }
    }
}
```
A usage example of a distributed lock via Redisson:
```java
String resourceKey = "goodgirl";
RLock lock = redisson.getLock(resourceKey);
try {
    lock.lock(5, TimeUnit.SECONDS);
    // Real business logic
    Thread.sleep(100);
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    if (lock.isLocked()) {
        lock.unlock();
    }
}
```
Attached is the internal lock and unlock implementation from Redisson, to give you a sense of its complexity.
```java
@Override
<T> RFuture<T> tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand<T> command) {
    internalLockLeaseTime = unit.toMillis(leaseTime);
    return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, command,
            "local mode = redis.call('hget', KEYS[1], 'mode'); " +
            "if (mode == false) then " +
                "redis.call('hset', KEYS[1], 'mode', 'read'); " +
                "redis.call('hset', KEYS[1], ARGV[2], 1); " +
                "redis.call('set', KEYS[2] .. ':1', 1); " +
                "redis.call('pexpire', KEYS[2] .. ':1', ARGV[1]); " +
                "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                "return nil; " +
            "end; " +
            "if (mode == 'read') or (mode == 'write' and redis.call('hexists', KEYS[1], ARGV[3]) == 1) then " +
                "local ind = redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
                "local key = KEYS[2] .. ':' .. ind;" +
                "redis.call('set', key, 1); " +
                "redis.call('pexpire', key, ARGV[1]); " +
                "local remainTime = redis.call('pttl', KEYS[1]); " +
                "redis.call('pexpire', KEYS[1], math.max(remainTime, ARGV[1])); " +
                "return nil; " +
            "end;" +
            "return redis.call('pttl', KEYS[1]);",
            Arrays.<Object>asList(getName(), getReadWriteTimeoutNamePrefix(threadId)),
            internalLockLeaseTime, getLockName(threadId), getWriteLockName(threadId));
}

@Override
protected RFuture<Boolean> unlockInnerAsync(long threadId) {
    String timeoutPrefix = getReadWriteTimeoutNamePrefix(threadId);
    String keyPrefix = getKeyPrefix(threadId, timeoutPrefix);
    return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN,
            "local mode = redis.call('hget', KEYS[1], 'mode'); " +
            "if (mode == false) then " +
                "redis.call('publish', KEYS[2], ARGV[1]); " +
                "return 1; " +
            "end; " +
            "local lockExists = redis.call('hexists', KEYS[1], ARGV[2]); " +
            "if (lockExists == 0) then " +
                "return nil;" +
            "end; " +
            "local counter = redis.call('hincrby', KEYS[1], ARGV[2], -1); " +
            "if (counter == 0) then " +
                "redis.call('hdel', KEYS[1], ARGV[2]); " +
            "end;" +
            "redis.call('del', KEYS[3] .. ':' .. (counter+1)); " +
            "if (redis.call('hlen', KEYS[1]) > 1) then " +
                "local maxRemainTime = -3; " +
                "local keys = redis.call('hkeys', KEYS[1]); " +
                "for n, key in ipairs(keys) do " +
                    "counter = tonumber(redis.call('hget', KEYS[1], key)); " +
                    "if type(counter) == 'number' then " +
                        "for i=counter, 1, -1 do " +
                            "local remainTime = redis.call('pttl', KEYS[4] .. ':' .. key .. ':rwlock_timeout:' .. i); " +
                            "maxRemainTime = math.max(remainTime, maxRemainTime);" +
                        "end; " +
                    "end; " +
                "end; " +
                "if maxRemainTime > 0 then " +
                    "redis.call('pexpire', KEYS[1], maxRemainTime); " +
                    "return 0; " +
                "end;" +
                "if mode == 'write' then " +
                    "return 0;" +
                "end; " +
            "end; " +
            "redis.call('del', KEYS[1]); " +
            "redis.call('publish', KEYS[2], ARGV[1]); " +
            "return 1; ",
            Arrays.<Object>asList(getName(), getChannelName(), timeoutPrefix, keyPrefix),
            LockPubSub.UNLOCK_MESSAGE, getLockName(threadId));
}
```
So the recommendation is to use a well-encapsulated component. If you insist on building all of this yourself with setnx or SET, xjjdog can only say you are asking to be abused. Understanding the basic principles is enough; sorting out all these details takes no small amount of effort.
After all this, how should you actually choose? It depends on your infrastructure. If your application already uses zk and the cluster performs well, prefer zk. If you only have Redis and don't want to introduce a bloated zk cluster just for a distributed lock, use Redis.