Author: Unmesh Joshi
Translator: java talent
Source: https://martinfowler.com/articles/patterns-of-distributed-systems/
Notify clients when a specific value on the server changes
Problem
Clients are interested in changes to specific values on the server. It is hard for clients to structure their logic if they need to poll the server continually to look for changes. If clients open too many connections to the server to watch for changes, it can overwhelm the server.
Solution
Allow clients to register their interest with the server for specific state changes. When the state changes, the server notifies the interested clients. The client and the server maintain a single socket channel, and the server sends state-change notifications on this channel. Clients might be interested in multiple values, but maintaining a separate connection for every watch can overwhelm the server, so clients can use Request Pipeline.
Consider the example of a simple key-value store used in the Consistent Core: clients may be interested in knowing when the value of a particular key changes, or when a key is removed. The implementation has two parts: a client-side implementation and a server-side implementation.
Client implementation
The client accepts a key and a function to be invoked when it gets watch events from the server. The client stores the function object for later invocation, and then sends a request to the server to register the watch.
ConcurrentHashMap<String, Consumer<WatchEvent>> watches = new ConcurrentHashMap<>();

public void watch(String key, Consumer<WatchEvent> consumer) {
    watches.put(key, consumer);
    sendWatchRequest(key);
}

private void sendWatchRequest(String key) {
    requestSendingQueue.submit(new RequestOrResponse(RequestId.WatchRequest.getId(),
            JsonSerDes.serialize(new WatchRequest(key)),
            correlationId.getAndIncrement()));
}
When a watch event is received on the connection, the corresponding consumer is invoked.
this.pipelinedConnection = new PipelinedConnection(address, requestTimeoutMs, (r) -> {
    logger.info("Received response on the pipelined connection " + r);
    if (r.getRequestId() == RequestId.WatchRequest.getId()) {
        WatchEvent watchEvent = JsonSerDes.deserialize(r.getMessageBodyJson(), WatchEvent.class);
        Consumer<WatchEvent> watchEventConsumer = getConsumer(watchEvent.getKey());
        watchEventConsumer.accept(watchEvent);
        lastWatchedEventIndex = watchEvent.getIndex(); //capture last watched index, in case of connection failure.
    }
    completeRequestFutures(r);
});
Server implementation
When the server receives a watch registration request, it keeps the mapping of the pipelined connection on which the request was received and the watched keys.
private Map<String, ClientConnection> watches = new HashMap<>();
private Map<ClientConnection, List<String>> connection2WatchKeys = new HashMap<>();

public void watch(String key, ClientConnection clientConnection) {
    logger.info("Setting watch for " + key);
    addWatch(key, clientConnection);
}

private synchronized void addWatch(String key, ClientConnection clientConnection) {
    mapWatchKey2Connection(key, clientConnection);
    watches.put(key, clientConnection);
}

private void mapWatchKey2Connection(String key, ClientConnection clientConnection) {
    List<String> keys = connection2WatchKeys.get(clientConnection);
    if (keys == null) {
        keys = new ArrayList<>();
        connection2WatchKeys.put(clientConnection, keys);
    }
    keys.add(key);
}
ClientConnection wraps the socket connection to the client. It has the following structure, which stays the same for both blocking-IO-based and non-blocking-IO-based servers.
public interface ClientConnection {
    void write(RequestOrResponse response);
    void close();
}
Multiple watches can be registered on a single connection, so it is important to store the mapping from the connection to the list of watched keys. That mapping is needed when the client connection is closed, to remove all the associated watches, as follows:
public void close(ClientConnection connection) {
    removeWatches(connection);
}

private synchronized void removeWatches(ClientConnection clientConnection) {
    List<String> watchedKeys = connection2WatchKeys.remove(clientConnection);
    if (watchedKeys == null) {
        return;
    }
    for (String key : watchedKeys) {
        watches.remove(key);
    }
}
Using reactive streams (https://www.reactive-streams.org/): the example here shows writing events directly to the pipelined connection. Some kind of backpressure at the application level is useful here. If a lot of events are generated, it is important to control the rate at which they are sent; keeping producers and consumers of events in sync is an important consideration. An issue in etcd (https://github.com/etcd-io/etcd/issues/11906) is an example of how much these considerations matter in production. [Reactive streams] makes such code easier to write by treating backpressure as a first-class concept. Protocols like rsocket provide a structured way to do this.
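To make the application-level backpressure idea concrete, here is a minimal, hypothetical sketch (not part of the original implementation): each connection gets a bounded event buffer, and when a slow consumer lets the buffer fill up, the producer's offer fails and the server can react (for example, by dropping the connection) instead of buffering events without limit.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: a bounded per-connection event buffer.
// If the consumer cannot keep up and the buffer fills, offer()
// returns false, signalling the producer to take corrective action.
class BoundedEventBuffer<E> {
    private final BlockingQueue<E> queue;

    BoundedEventBuffer(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Producer side: returns false when the buffer is full (slow consumer).
    boolean offer(E event) {
        return queue.offer(event);
    }

    // Consumer side: blocks until an event is available.
    E take() throws InterruptedException {
        return queue.take();
    }

    int size() {
        return queue.size();
    }
}
```

Libraries implementing [reactive streams] provide this kind of bounded demand signalling in a much more general form; the sketch only illustrates the core idea.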
When a specific event, such as setting a value for a key, occurs on the server, the server notifies all the registered clients by constructing a corresponding WatchEvent.
private synchronized void notifyWatchers(SetValueCommand setValueCommand, Long entryId) {
    if (!hasWatchesFor(setValueCommand.getKey())) {
        return;
    }
    String watchedKey = setValueCommand.getKey();
    WatchEvent watchEvent = new WatchEvent(watchedKey,
            setValueCommand.getValue(),
            EventType.KEY_ADDED, entryId);
    notify(watchEvent, watchedKey);
}

private void notify(WatchEvent watchEvent, String watchedKey) {
    List<ClientConnection> watches = getAllWatchersFor(watchedKey);
    for (ClientConnection pipelinedClientConnection : watches) {
        try {
            String serializedEvent = JsonSerDes.serialize(watchEvent);
            getLogger().trace("Notifying watcher of event "
                    + watchEvent + " from " + server.getServerId());
            pipelinedClientConnection.write(new RequestOrResponse(RequestId.WatchRequest.getId(),
                    serializedEvent));
        } catch (NetworkException e) {
            removeWatches(pipelinedClientConnection); //remove watch if network connection fails.
        }
    }
}
One of the key things to note is that watch-related state can be accessed concurrently from the client request-handling code and the client connection-handling code (which closes the connection). So all the methods accessing watch state need to be protected by locks.
Watches in hierarchical storage
The Consistent Core mostly supports hierarchical storage. Watches can be set on a parent node or a key prefix. Any change to a child node triggers the watches set on the parent node. For each event, the Consistent Core walks the path to check whether there are watches set on the parent path, and sends events to all of those watches.
List<ClientConnection> getAllWatchersFor(String key) {
    List<ClientConnection> affectedWatches = new ArrayList<>();
    String[] paths = key.split("/");
    String currentPath = paths[0];
    addWatch(currentPath, affectedWatches);
    for (int i = 1; i < paths.length; i++) {
        currentPath = currentPath + "/" + paths[i];
        addWatch(currentPath, affectedWatches);
    }
    return affectedWatches;
}

private void addWatch(String currentPath, List<ClientConnection> affectedWatches) {
    ClientConnection clientConnection = watches.get(currentPath);
    if (clientConnection != null) {
        affectedWatches.add(clientConnection);
    }
}
This allows a watch to be set on a key prefix such as "servers". Any key created with this prefix, such as "servers/1" or "servers/2", triggers the watch.
Because the mapping of the functions to be invoked is stored by key prefix, it is important to walk the hierarchy on the client side as well, to find the function to invoke when the client receives an event. An alternative is to send the path for which the event was triggered along with the event, so that the client knows which watch caused the event to be sent.
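The client-side traversal mentioned above can be sketched as follows. This is a hypothetical illustration (class and method names are assumptions, not from the original codebase): for an event on "servers/1", the lookup tries the full key first and then shortens it one path segment at a time until a registered consumer is found.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical sketch of the client-side lookup for hierarchical watches:
// walk up the key's path segments until a consumer registered for a
// prefix is found.
class WatchConsumerRegistry {
    private final Map<String, Consumer<String>> watches = new ConcurrentHashMap<>();

    void register(String keyPrefix, Consumer<String> consumer) {
        watches.put(keyPrefix, consumer);
    }

    // For an event on "servers/1", try "servers/1" first, then "servers".
    Consumer<String> consumerFor(String eventKey) {
        String path = eventKey;
        while (!path.isEmpty()) {
            Consumer<String> consumer = watches.get(path);
            if (consumer != null) {
                return consumer;
            }
            int lastSlash = path.lastIndexOf('/');
            path = lastSlash == -1 ? "" : path.substring(0, lastSlash);
        }
        return null; //no watch registered for this key or any of its prefixes
    }
}
```

If the server sends the triggering watch path with the event instead, this traversal becomes a single map lookup, which is the trade-off the alternative in the text describes.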
Handling connection failures
The connection between the client and the server can fail at any time. For some use cases this is problematic, because the client might miss certain events while it is disconnected. For example, a cluster controller might be interested in knowing whether some nodes have failed, which is indicated by delete events for certain keys. The client needs to tell the server about the last event it received, so when the client sets the watch again, it sends the last received event number. The server is then expected to send all the events it has recorded starting from that event number.
In the Consistent Core client, this can be done when the client re-establishes the connection to the leader.
Pull-based design in Kafka: in a typical watch design, the server pushes watch events to the client. [Kafka] follows an end-to-end pull-based design. In its new architecture, Kafka brokers periodically pull the metadata log from the controller quorum (which is itself an example of the Consistent Core). The offset-based pull mechanism allows clients to read events from the last known offset, like any other Kafka consumer, avoiding event loss.
private void connectToLeader(List<InetAddressAndPort> servers) {
    while (isDisconnected()) {
        logger.info("Trying to connect to next server");
        waitForPossibleLeaderElection();
        establishConnectionToLeader(servers);
    }
    setWatchesOnNewLeader();
}

private void setWatchesOnNewLeader() {
    for (String watchKey : watches.keySet()) {
        sendWatchResetRequest(watchKey);
    }
}

private void sendWatchResetRequest(String key) {
    pipelinedConnection.send(new RequestOrResponse(RequestId.SetWatchRequest.getId(),
            JsonSerDes.serialize(new SetWatchRequest(key, lastWatchedEventIndex)),
            correlationId.getAndIncrement()));
}
The server numbers every event that happens. For example, if the server is the Consistent Core, it stores all the state changes in strict order, with each change numbered using the log index as discussed in Write-Ahead Log. Clients can then ask for events starting from a specific index.
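The numbering scheme can be illustrated with a minimal sketch (the class and method names here are assumptions for illustration, not the actual Consistent Core code): the server stamps every change with the next log index, and a reconnecting client asks for everything after the last index it saw.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the server stamps each state change with a
// monotonically increasing index; a reconnecting client requests all
// events after the last index it received.
class NumberedEventLog {
    record Event(long index, String key, String value) {}

    private final List<Event> log = new ArrayList<>();
    private long nextIndex = 1;

    synchronized Event append(String key, String value) {
        Event e = new Event(nextIndex++, key, value);
        log.add(e);
        return e;
    }

    // All events strictly after the client's last seen index, in order.
    synchronized List<Event> eventsAfter(long lastSeenIndex) {
        List<Event> result = new ArrayList<>();
        for (Event e : log) {
            if (e.index() > lastSeenIndex) {
                result.add(e);
            }
        }
        return result;
    }
}
```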
Deriving events from the key-value store
Events can be generated by looking at the current state of the key-value store, if the store also numbers every change that happens and stores that number with each value.
When the client re-establishes its connection to the server, it can set the watches again and also send the last seen change number. The server can then compare it with the number stored with the value, and if it is greater than the one the client sent, resend the events to the client. Deriving events from the key-value store can be a bit awkward, because the events need to be guessed, and some events might be missed. For example, if a key is created and then deleted while the client was disconnected, the create event is missed.
private synchronized void eventsFromStoreState(String key, long stateChangesSince) {
    List<StoredValue> values = getValuesForKeyPrefix(key);
    for (StoredValue value : values) {
        if (value == null) { //the key was probably deleted, send a deleted event
            notify(new WatchEvent(key, EventType.KEY_DELETED), key);
        } else if (value.index > stateChangesSince) { //the key/value was created/updated after the last event the client knows about
            notify(new WatchEvent(key, value.getValue(), EventType.KEY_ADDED, value.getIndex()), key);
        }
    }
}
[zookeeper] uses this approach. Watches in zookeeper are also one-time triggers by default: once an event has fired, the client needs to set the watch again if it wants to receive further events. Some events might be missed before the watch is reset, so clients need to make sure they read the latest state to avoid missing any updates.
Storing event history
It is easier to keep a history of past events and replay them to clients from the event history. The problem with this approach is that the event history needs to be limited, say to 1000 events. If the client is disconnected for a long time, it might miss events beyond the 1000-event window. A simple implementation using Google Guava's EvictingQueue looks like this:
public class EventHistory implements Logging {
    Queue<WatchEvent> events = EvictingQueue.create(1000);

    public void addEvent(WatchEvent e) {
        getLogger().info("Adding " + e);
        events.add(e);
    }

    public List<WatchEvent> getEvents(String key, Long stateChangesSince) {
        return this.events.stream()
                .filter(e -> e.getIndex() > stateChangesSince && e.getKey().equals(key))
                .collect(Collectors.toList());
    }
}
When the client re-establishes the connection and resets its watches, the events can be sent from the history.
private void sendEventsFromHistory(String key, long stateChangesSince) {
    List<WatchEvent> events = eventHistory.getEvents(key, stateChangesSince);
    for (WatchEvent event : events) {
        notify(event, event.getKey());
    }
}
Using multi-version storage
To keep track of all the changes, multi-version storage can be used. It tracks every version of every key, so all the changes from a requested version can be retrieved easily.
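A minimal sketch of the multi-version idea (a hypothetical illustration, not etcd's actual implementation): every write, including a delete, gets a new revision, and all versions are kept, so changes after any revision can be replayed exactly, without the missed-delete problem of deriving events from current state.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Hypothetical sketch of multi-version storage: each change (put or
// delete) is recorded under a new revision number; replaying changes
// after a given revision reconstructs the exact event sequence.
class MultiVersionStore {
    record Version(long revision, String key, String value, boolean deleted) {}

    private final TreeMap<Long, Version> byRevision = new TreeMap<>();
    private long revision = 0;

    synchronized void put(String key, String value) {
        byRevision.put(++revision, new Version(revision, key, value, false));
    }

    synchronized void delete(String key) {
        byRevision.put(++revision, new Version(revision, key, null, true));
    }

    // All changes made strictly after the given revision, in order.
    synchronized List<Version> changesSince(long sinceRevision) {
        return new ArrayList<>(byRevision.tailMap(sinceRevision, false).values());
    }
}
```

A real implementation also needs to compact old revisions so the store does not grow without bound; that concern is omitted here.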
[etcd] uses this approach from version 3 onwards.
Examples
[zookeeper] allows watches to be set on nodes.
Products such as [kafka] use this for group membership and failure detection of cluster members.
[etcd] has a watch implementation which is heavily used by [kubernetes] in its resource-watch implementation.