Cache Eviction Algorithms

Posted by V0oD0o on Sat, 05 Feb 2022 18:37:04 +0100

FIFO

First In, First Out: the first data in is the first data out. Each time new data is inserted while the queue is full, this algorithm removes the earliest inserted entry.

It is easy to implement with a LinkedList:

package cache;

import java.util.Iterator;
import java.util.LinkedList;

public class FIFO {

	// Head holds the newest element, tail the oldest.
	LinkedList<Integer> fifo = new LinkedList<Integer>();
	// Cache capacity.
	int size = 3;
	
	private void print() {
		System.out.println(this.fifo);
	}
	
	// Insert at the head; once capacity is exceeded, evict the oldest element from the tail.
	public void add(int i) {
		fifo.addFirst(i);
		if(fifo.size() > size) {
			fifo.removeLast();
		}
		print();
	}
	
	// A read only looks the element up; it does not change the order (unlike LRU).
	private void read(int i) {
		Iterator<Integer> iterator = fifo.iterator();
		while(iterator.hasNext()) {
			int j = iterator.next();
			if(i == j) {
				System.out.println("find it!");
				print();
				return;
			}
		}
		System.out.println("not found");
		print();
	}
	
	public static void main(String[] args) {
		FIFO fifo = new FIFO();
		fifo.add(1);
		fifo.add(2);
		fifo.add(3);
		fifo.read(1);
		fifo.read(100);
	}
}

Least Recently Used (LRU)

LRU stands for Least Recently Used: it evicts the value that has gone the longest without being accessed. FIFO is blunt and kicks out whichever element has been around the longest, useful or not. LRU assumes that data accessed frequently in the recent past will also be accessed frequently in the future, so it evicts the data that has sat idle the longest. LRU can be implemented with a LinkedHashMap, an array, or a linked list (a LinkedHashMap sketch follows the code below). Here is another linked-list example: newly added data is placed at the head, recently accessed data is moved to the head, and the tail element is removed when space is full.

package cache;

import java.util.Iterator;
import java.util.LinkedList;

public class LRU {

	// Head holds the most recently used element, tail the least recently used.
	LinkedList<Integer> lru = new LinkedList<Integer>();
	// Cache capacity.
	int size = 3;
	
	private void print() {
		System.out.println(this.lru);
	}
	
	// Insert at the head; evict from the tail when capacity is exceeded.
	public void add(int i) {
		lru.addFirst(i);
		if(lru.size() > size) {
			lru.removeLast();
		}
		print();
	}
	
	// On a hit, move the element to the head so it becomes the most recently used.
	private void read(int i) {
		Iterator<Integer> iterator = lru.iterator();
		int index = 0;
		
		while(iterator.hasNext()) {
			int j = iterator.next();
			if(i == j) {
				System.out.println("find it!");
				// Move the hit element to the head; safe because we return before the iterator is used again.
				lru.remove(index);
				lru.addFirst(j);
				print();
				return;
			}
			index++;
		}
		System.out.println("not found");
		print();
	}
	
	public static void main(String[] args) {
		LRU lru = new LRU();
		lru.add(1);
		lru.add(2);
		lru.add(3);
		lru.read(2);
		lru.read(100);
	}
}
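
The paragraph above also mentions LinkedHashMap. For comparison, here is a minimal sketch of that approach (the class name and the capacity of 3 are illustrative): with accessOrder set to true, LinkedHashMap moves an entry to the tail on every get, and overriding removeEldestEntry evicts the head, i.e. the least recently used entry, once the capacity is exceeded.

package cache;

import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapLRU<K, V> extends LinkedHashMap<K, V> {

	private final int capacity;

	public LinkedHashMapLRU(int capacity) {
		// accessOrder=true: iteration order runs from least to most recently accessed.
		super(16, 0.75f, true);
		this.capacity = capacity;
	}

	@Override
	protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
		// Called after each put; returning true removes the least recently used entry.
		return size() > capacity;
	}

	public static void main(String[] args) {
		LinkedHashMapLRU<Integer, Integer> lru = new LinkedHashMapLRU<>(3);
		lru.put(1, 1);
		lru.put(2, 2);
		lru.put(3, 3);
		lru.get(1);     // 1 becomes the most recently used
		lru.put(4, 4);  // evicts 2, the least recently used
		System.out.println(lru.keySet()); // [3, 1, 4]
	}
}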

Least Frequently Used (LFU)

LFU stands for Least Frequently Used: it evicts the value that has been used the least over a recent period. You can think of it as LRU with one more comparison. LFU needs both the access count and the access time as reference metrics, because two entries may accumulate the same number of accesses over the same period; so each entry carries a counter, and when the counts are equal, the entry with the older last access time is evicted.

package cache;

// Eviction metadata for one cache key: access count and last access time.
public class Dto implements Comparable<Dto> {

	private Integer key;
	private int count;
	private long lastTime;
	
	public Dto(Integer key, int count, long lastTime) {
		this.key = key;
		this.count = count;
		this.lastTime = lastTime;
	}

	@Override
	public int compareTo(Dto o) {
		// Order by access count first; break ties by the earlier last access time.
		int compare = Integer.compare(this.count, o.count);
		return compare == 0 ? Long.compare(this.lastTime, o.lastTime) : compare;
	}

	@Override
	public String toString() {
		return String.format("[key=%s,count=%s,lastTime=%s]",key,count,lastTime);
	}
	
	public Integer getKey() {
		return key;
	}
	public void setKey(Integer key) {
		this.key = key;
	}
	public int getCount() {
		return count;
	}
	public void setCount(int count) {
		this.count = count;
	}
	public long getLastTime() {
		return lastTime;
	}
	public void setLastTime(long lastTime) {
		this.lastTime = lastTime;
	}
}

package cache;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class LFU {

	private final int size = 3;

	// key -> cached value
	private Map<Integer, Integer> cache = new HashMap<>();
	// key -> access metadata (count and last access time)
	private Map<Integer, Dto> count = new HashMap<>();
	
	private void print() {
		System.out.println("cache="+cache);
		System.out.println("count="+count);
	}
	
	// Evict the entry with the lowest access count; ties are broken by the earliest last access time.
	private void removeElement() {
		Dto dto = Collections.min(count.values());
		cache.remove(dto.getKey());
		count.remove(dto.getKey());
	}
	
	// Bump the access count and refresh the last access time for an existing key.
	// (System.currentTimeMillis() has millisecond resolution, so ties are possible in a fast demo.)
	private void addCount(Integer key) {
		Dto dto = count.get(key);
		dto.setCount(dto.getCount() + 1);
		dto.setLastTime(System.currentTimeMillis());
	}

	// Insert or update a key; a new key evicts the minimum entry first when the cache is full.
	public void put(Integer key, Integer value) {
		Integer v = cache.get(key);
		if(v == null) {
			if(cache.size() == size) {
				removeElement();
			}
			count.put(key, new Dto(key, 1, System.currentTimeMillis()));
		}else {
			addCount(key);
		}
		cache.put(key, value);
	}
	
	public Integer get(Integer key) {
		Integer value = cache.get(key);
		if(value != null) {
			addCount(key);
			return value;
		}
		return null;
	}
	
	public static void main(String[] args) {
		LFU lfu = new LFU();
		
		System.out.println("add 1-3:");
		lfu.put(1, 1);
		lfu.put(2, 2);
		lfu.put(3, 3);
		lfu.print();
		
		System.out.println("1,2 Access, 3 No, Add 4, Eliminate 3");
		lfu.get(1);
		lfu.get(2);
		lfu.print();
		lfu.put(4, 4);
		lfu.print();
		
		System.out.println("2=3 Secondary, 1,4=2 Secondly, but 4 joins later and 5 joins to eliminate 1");
		lfu.get(2);
		lfu.get(4);
		lfu.print();
		System.out.println("add 5:");
		lfu.put(5, 5);
		lfu.print();
	}
}

Application

Redis is a typical scenario where cache eviction comes into play. Its common policies are as follows (a minimal configuration sketch follows the list):

  • noeviction: do not evict anything; when the maximum memory limit is reached and more memory is needed, return an error directly (dangerous).
  • allkeys-lru: across all keys, evict the least recently used keys first (LRU).
  • allkeys-random: across all keys, evict keys at random (which sounds unreasonable).
  • volatile-lru: restricted to keys with an expire set, evict the least recently used keys first (LRU).
  • volatile-random: restricted to keys with an expire set, evict keys at random.
  • volatile-ttl: restricted to keys with an expire set, evict keys with the shortest remaining time to live (TTL) first.
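
A minimal sketch of the corresponding configuration, assuming the standard maxmemory directives in redis.conf (the memory limit here is illustrative):

# redis.conf
maxmemory 100mb
maxmemory-policy allkeys-lru

The policy can also be switched at runtime with CONFIG SET maxmemory-policy allkeys-lru.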

Topics: Algorithm