Algorithm 1: divide and conquer
Basic concepts
1. Divide a complex problem into two or more identical or similar subproblems, then divide those subproblems into still smaller subproblems, and so on, until the subproblems become simple enough to be solved directly; the solution of the original problem is then the combination of the solutions of the subproblems.
2. The divide and conquer strategy: if a problem of scale n can be solved easily (for example, when n is small), solve it directly; otherwise decompose it into k smaller subproblems that are independent of each other and have the same form as the original problem, solve these subproblems recursively, and then merge their solutions to obtain the solution of the original problem.
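As a minimal sketch of this idea (an illustration added here, not part of the original text; the class and method names are made up), the following program finds the maximum of an array by dividing the interval in half, solving each half recursively, and combining the two answers. The recursion stops as soon as the interval holds a single element, which is the "simple enough to solve directly" case.

public class DivideAndConquerSketch {

    // Find the maximum of a[left..right] by dividing, conquering and combining.
    static int maxOf(int[] a, int left, int right) {
        if (left == right)                   // the problem is small enough: solve it directly
            return a[left];
        int middle = (left + right) / 2;     // divide into two subproblems
        int leftMax = maxOf(a, left, middle);
        int rightMax = maxOf(a, middle + 1, right);
        return Math.max(leftMax, rightMax);  // combine the subproblem solutions
    }

    public static void main(String[] args) {
        int[] a = {85, 3, 52, 9, 7};
        System.out.println(maxOf(a, 0, a.length - 1));   // prints 85
    }
}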
Application
1) The problem can easily be solved directly once its scale is reduced to a certain extent;
2) The problem can be decomposed into several smaller problems of the same kind, that is, the problem has the property of optimal substructure;
3) The solutions of the subproblems obtained by decomposition can be combined into a solution of the original problem;
4) The subproblems obtained by decomposition are independent of each other, that is, they do not share common subproblems.
Complexity analysis of divide and conquer method
Suppose the divide and conquer method divides a problem of scale n into k subproblems of scale n/m. Let the decomposition threshold be n0 = 1 and assume that solving a subproblem of scale 1 takes one unit of time. Suppose further that decomposing the original problem into k subproblems and merging the k subproblem solutions into the solution of the original problem together take f(n) units of time. If T(n) denotes the time needed by the divide and conquer method to solve a problem of scale |P| = n, then:
T(n) = kT(n/m) + f(n)
Iterating this recurrence (for n a power of m, with T(1) = 1) gives its solution:
T(n) = n^(log_m k) + f(n) + k·f(n/m) + k^2·f(n/m^2) + ... + k^(log_m n - 1)·f(m)
The recurrence and its solution only give the value of T(n) when n is a power of m, but if T(n) is smooth enough, the growth rate of T(n) can be estimated from its values at powers of m. It is generally assumed that T(n) increases monotonically, so that when m^i ≤ n < m^(i+1), we have T(m^i) ≤ T(n) < T(m^(i+1)).
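As a concrete instance (a worked example added here for illustration), merge sort fits this model with k = 2, m = 2 and f(n) = cn for some constant c:

T(n) = 2T(n/2) + cn
     = 4T(n/4) + 2cn
     = 8T(n/8) + 3cn
     = ...
     = nT(1) + cn·log_2 n = O(n log n),

which matches the running time of the merge sort program below.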
Divide and conquer example: merge sort and quick sort
public class DivideAndConquerMergeSort {

    /**
     * Merge the two sorted halves a[left..middle] and a[middle+1..right].
     */
    static void Merge(int a[], int left, int middle, int right) {
        // Sizes of the left and right sub-arrays
        int n1 = middle - left + 1;
        int n2 = right - middle;

        // Copy the two halves into temporary arrays
        int begin[] = new int[n1];
        int end[] = new int[n2];
        for (int i = 0; i < n1; i++)
            begin[i] = a[left + i];
        for (int i = 0; i < n2; i++)
            end[i] = a[middle + 1 + i];

        // Merge the temporary arrays back into a[left..right];
        // key indexes the original array so it is filled in a single pass
        int i = 0, j = 0;
        for (int key = left; key <= right; key++) {
            if (i < n1 && (j == n2 || begin[i] <= end[j]))
                a[key] = begin[i++];
            else
                a[key] = end[j++];
        }
    }

    /**
     * Split the interval [left, right] in the middle, sort both halves recursively, then merge.
     */
    static void MergeSort(int a[], int left, int right) {
        if (left < right) {
            int middle = (left + right) / 2;
            MergeSort(a, left, middle);
            MergeSort(a, middle + 1, right);
            Merge(a, left, middle, right);
        }
    }

    public static void main(String[] args) {
        int a[] = {85, 3, 52, 9, 7, 1, 5, 4};
        MergeSort(a, 0, a.length - 1);
        for (int i = 0; i < a.length; i++) {
            System.out.print(" " + a[i]);
        }
    }
}
public class DivideAndConquerQuickSort {

    /**
     * Swap the elements at indexes i and j.
     */
    static void swap(int A[], int i, int j) {
        int temp = A[i];
        A[i] = A[j];
        A[j] = temp;
    }

    /**
     * Partition step:
     * 1. Choose a pivot; usually the first or last element is taken. Here the last element is used.
     * 2. Move left forward until a value greater than the pivot is found, move right backward
     *    until a value less than the pivot is found, then swap the two values.
     * 3. Repeat until left and right meet, then place the pivot at that position.
     * @return the final index of the pivot
     */
    static int PartSort(int[] array, int left, int right) {
        int key = array[right];   // the pivot value
        int count = right;        // remember the original right index (where the pivot sits)
        while (left < right) {    // the bounds check also prevents the indexes from crossing
            while (left < right && array[left] <= key) {
                ++left;
            }
            while (left < right && array[right] >= key) {
                --right;
            }
            swap(array, left, right);
        }
        swap(array, right, count); // put the pivot into its final position
        return right;
    }

    /**
     * Divide and conquer: recursively sort the parts on both sides of the pivot.
     */
    static void QuickSort(int array[], int left, int right) {
        if (left >= right) {      // the interval holds at most one element
            return;
        }
        int index = PartSort(array, left, right); // pivot position
        QuickSort(array, left, index - 1);
        QuickSort(array, index + 1, right);
    }

    public static void main(String[] args) {
        int a[] = {1, 5, -5, 54, 15, 67, 16, 23};
        QuickSort(a, 0, a.length - 1);
        for (int i = 0; i < a.length; i++) {
            System.out.print(" " + a[i]);
        }
        System.out.print("\n");
    }
}
Algorithm experience
As typical divide and conquer algorithms, merge sort and quick sort fully show the idea of dividing and then conquering. My experience from programming with this method is that the program naturally splits into two parts: the first part keeps "breaking apart" the problem to reduce its size until the subproblems are small enough to be computed easily, and the second part keeps merging the answers of the subproblems until the solution of the whole problem is obtained.
Algorithm 2: greedy algorithm
1, Basic concepts:
The so-called greedy algorithm always makes the choice that looks best at the moment. In other words, it does not consider global optimality; what it produces is only a locally optimal solution in some sense.
Greedy algorithms have no fixed algorithmic framework; the key to the design is the choice of the greedy strategy. Note that a greedy algorithm does not obtain the globally optimal solution for all problems, and the chosen greedy strategy must have no aftereffect, that is, the process after a certain state must not affect the earlier states and may depend only on the current state.
Therefore, we must carefully analyze whether the greedy strategy satisfies the no-aftereffect property.
2, Basic idea of greedy algorithm:
1. Establish a mathematical model to describe the problem.
2. Divide the problem into several subproblems.
3. Solve each subproblem to obtain the local optimal solution of the subproblem.
4. Combine the locally optimal solutions of the subproblems into a solution of the original problem.
3, Application of greedy algorithm
The premise of greedy strategy is that the local optimal strategy can lead to the global optimal solution.
In practice, greedy algorithms are applicable to relatively few problems. Generally, to judge whether a problem is suitable for a greedy algorithm, one can pick several concrete instances of the problem and analyze them.
4, Implementation framework of greedy algorithm
Start from an initial solution of the problem;
while (a further step towards the given overall goal can be made) {
    use a feasible decision to obtain one solution element of a feasible solution;
}
All solution elements together form a feasible solution of the problem.
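As a small sketch of this framework (added for illustration; the class name and coin values are hypothetical), greedy change-making repeatedly makes the locally best decision of taking the largest coin that still fits. It is optimal for canonical coin systems such as {25, 10, 5, 1} but not for arbitrary coin systems, which is exactly why the greedy strategy must be examined problem by problem.

public class GreedyChangeSketch {

    // Greedy change-making: at each step take the largest coin that still fits.
    // coins must be sorted in descending order.
    static int minCoins(int[] coins, int amount) {
        int count = 0;
        for (int coin : coins) {
            while (amount >= coin) {   // the greedy choice: use this coin as often as possible
                amount -= coin;
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] coins = {25, 10, 5, 1};
        System.out.println(minCoins(coins, 63));   // 25 + 25 + 10 + 1 + 1 + 1 -> 6 coins
    }
}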
5, Choice of greedy strategy
Because the greedy algorithm tries to reach the globally optimal solution only through a sequence of locally optimal choices, we must carefully judge whether a problem is suitable for the greedy strategy, and whether the solution it finds is necessarily an optimal solution of the problem.
Example of greedy strategy: Prim's algorithm
import java.util.*;

public class GreedyAlgorithmPrim {

    static int MAX = Integer.MAX_VALUE;

    public static void main(String[] args) {
        // Adjacency matrix of the undirected graph
        int[][] map = new int[][] {
            { 0, 1, 6, 2 },
            { 1, 0, 3, 2 },
            { 6, 3, 0, 1 },
            { 2, 2, 1, 0 } };
        prim(map, map.length);
    }

    public static void prim(int[][] graph, int n) {
        // Node names
        char[] c = new char[] { 'A', 'B', 'C', 'D' };
        int[] lowcost = new int[n];   // minimum weight from each node to the growing tree
        int[] mid = new int[n];       // predecessor node inside the tree
        List<Character> list = new ArrayList<Character>(); // order in which the nodes are added
        int i, j, min, minid, sum = 0;

        // Initialize the auxiliary arrays from the start node A
        for (i = 1; i < n; i++) {
            lowcost[i] = graph[0][i];
            mid[i] = 0;
        }
        list.add(c[0]);

        // n - 1 nodes still have to be added
        for (i = 1; i < n; i++) {
            min = MAX;
            minid = 0;
            // Greedy choice: pick the node that is currently closest to the tree
            for (j = 1; j < n; j++) {
                if (lowcost[j] != 0 && lowcost[j] < min) {
                    min = lowcost[j];
                    minid = j;
                }
            }
            if (minid == 0) return;
            list.add(c[minid]);
            lowcost[minid] = 0;
            sum += min;
            System.out.println(c[mid[minid]] + " -> " + c[minid] + " weight: " + min);
            // After adding the node, update the distances of the remaining nodes to the tree
            for (j = 1; j < n; j++) {
                if (lowcost[j] != 0 && lowcost[j] > graph[minid][j]) {
                    lowcost[j] = graph[minid][j];
                    mid[j] = minid;
                }
            }
        }
        System.out.println("sum: " + sum);
    }
}
Algorithm experience
Prim's algorithm is a good embodiment of the greedy strategy. In implementing it we see that the greedy strategy first collects and stores all candidate choices, and then at each step selects the most suitable one according to the greedy criterion. The greedy strategy is relatively fast because it does not need to consider all possible situations (as backtracking does); but because each step seeks only a locally optimal choice, the result is not necessarily the overall optimal solution. The correctness of the algorithm depends on the choice of the greedy strategy, so it also has certain limitations.
Algorithm 3: dynamic programming algorithm
1, Basic concepts
In the dynamic programming process, each decision depends on the current state and in turn causes a state transition. A decision sequence is thus produced as the state changes, so this kind of multi-stage optimal decision-making process is called dynamic programming.
2, Basic ideas and Strategies
The basic idea is similar to the divide and conquer method: the problem to be solved is decomposed into several subproblems (stages), which are solved in order, and the solution of an earlier subproblem provides useful information for solving a later one. When solving any subproblem, the various possible local solutions are listed; those that may lead to the optimum are kept by the decision, and the others are discarded. The subproblems are solved in order, and the last subproblem yields the solution of the initial problem.
Because most of the problems solved by dynamic programming have the characteristics of overlapping subproblems, in order to reduce repeated calculation, each subproblem is solved only once, and its different states in different stages are saved in a two-dimensional array.
The biggest difference from the divide and conquer method is that the sub problems obtained after decomposition are often not independent of each other (that is, the solution of the next sub stage is based on the solution of the previous sub stage).
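A minimal sketch of this difference (an illustration added here, not from the original text; the class name is made up): computing Fibonacci numbers recursively produces heavily overlapping subproblems, and saving each answer the first time it is computed ensures that every subproblem is solved only once.

public class DynamicProgrammingFibSketch {

    // Plain recursion recomputes the same subproblems many times;
    // storing each answer in memo[] makes every subproblem be solved only once.
    static long fib(int n, long[] memo) {
        if (n <= 1) return n;
        if (memo[n] != 0) return memo[n];            // reuse the saved subproblem solution
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
        return memo[n];
    }

    public static void main(String[] args) {
        int n = 50;
        System.out.println(fib(n, new long[n + 1])); // prints 12586269025
    }
}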
3, Where applicable
Problems that can be solved by dynamic programming generally have three properties:
(1) Optimality principle: if the solutions of the subproblems contained in an optimal solution of the problem are themselves optimal, the problem is said to have optimal substructure, that is, it satisfies the optimality principle.
(2) No aftereffect: once the state of a certain stage is determined, it is not affected by later decisions. In other words, the process after a given state does not affect the earlier states and depends only on the current state.
(3) Overlapping subproblems: the subproblems are not independent, and a subproblem may be used many times in later decisions. (This property is not a necessary condition for applying dynamic programming, but without it the dynamic programming algorithm has no advantage over other algorithms.)
Algorithm example: knapsack problem
public class DynamicProgrammingKnapsack {

    public static void main(String[] args) {
        // Item values, item weights (index 0 is unused), and knapsack capacity
        int v[] = { 0, 8, 10, 6, 3, 7, 2 };
        int w[] = { 0, 4, 6, 2, 2, 5, 1 };
        int c = 12;

        // m[i][j]: maximum value obtainable from the first i items with capacity j
        int m[][] = new int[v.length][c + 1];
        for (int i = 1; i < v.length; i++) {
            for (int j = 1; j <= c; j++) {
                if (j >= w[i])
                    // Either take item i or skip it, whichever gives the larger value
                    m[i][j] = Math.max(m[i - 1][j - w[i]] + v[i], m[i - 1][j]);
                else
                    m[i][j] = m[i - 1][j];
            }
        }

        // The answer is the largest entry in the table
        int max = 0;
        for (int i = 0; i < v.length; i++) {
            for (int j = 0; j <= c; j++) {
                if (m[i][j] > max)
                    max = m[i][j];
            }
        }
        System.out.println(max);
    }
}
4, Algorithm experience
In this exercise the dynamic programming algorithm is used to solve the knapsack problem. The amount of space that has to be allocated at the start is relatively large: it is fine when the knapsack capacity and the number of items are small, but once they grow, memory consumption becomes serious and the amount of computation increases greatly. Dynamic programming resembles the divide and conquer method in that it splits the problem into multiple subproblems and solves them step by step, but the subproblems solved earlier influence the subproblems solved later, unlike the independent subproblems of divide and conquer. It also keeps a state value for each subproblem and records the optimal answer; when all subproblems have been solved, the recorded optimum becomes the solution of the problem. The main points to focus on are the allocation of the table and the computation of the subproblems.
Algorithm 4: backtracking method
1. Concept
The backtracking algorithm is essentially a search process similar to enumeration: it looks for a solution of the problem during the search, and when it finds that the current path cannot satisfy the solution conditions, it "backtracks" and tries other paths.
The backtracking method is an optimal-search method: it searches forward according to the optimality conditions to reach the goal, but when it reaches some step and finds that the earlier choice is not good or cannot reach the goal, it steps back and chooses again. This technique of going back and retrying after failure is the backtracking method, and a state point that satisfies the backtracking condition is called a "backtracking point".
Many complex and large-scale problems can use backtracking method, which is known as "general problem-solving method".
2. Basic thought
In the solution space tree containing all solutions of the problem, the tree is explored from the root node according to the depth-first search strategy. When a node is explored, first judge whether it can contain a solution of the problem: if it can, continue exploring from that node; if it cannot, backtrack to its ancestor nodes layer by layer. In fact, backtracking is a depth-first search algorithm on an implicit graph.
If the backtracking method is used to find all solutions of the problem, the search must backtrack all the way to the root, and all feasible subtrees of the root node must have been searched.
If the backtracking method is used to find any solution, it can end as long as a solution of the problem is found.
3. General steps for solving problems with backtracking method:
(1) For the given problem, determine the solution space of the problem:
Firstly, the solution space of the problem must be clearly defined; it should contain at least one (optimal) solution of the problem.
(2) Determine the extended search rules of nodes
(3) The solution space is searched by depth first method, and the pruning function is used to avoid invalid search.
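A small sketch of these steps (added for illustration; the class name is made up, and the subset program that follows is the example actually discussed in this section): counting the solutions of the n-queens problem defines the solution space as one column choice per row, searches it depth-first, and prunes with a feasibility test at every level.

public class BacktrackingQueensSketch {

    // Count the placements of n non-attacking queens, one queen per row.
    static int count(int[] cols, int row) {
        int n = cols.length;
        if (row == n) return 1;          // every row is filled: one complete solution
        int solutions = 0;
        for (int c = 0; c < n; c++) {
            if (safe(cols, row, c)) {    // pruning: skip columns attacked by earlier queens
                cols[row] = c;
                solutions += count(cols, row + 1);
            }
        }
        return solutions;                // returning here is the backtracking step
    }

    static boolean safe(int[] cols, int row, int c) {
        for (int r = 0; r < row; r++) {
            if (cols[r] == c || Math.abs(cols[r] - c) == row - r) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(count(new int[8], 0));   // prints 92 for 8 queens
    }
}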
4. Algorithm example: finding subset problem
public class BacktrackingSubsetProblem {

    private static int[] s = {2, 2, 3};
    private static int n = s.length;
    private static int[] x = new int[n];   // x[i] == 1 means s[i] is chosen

    /**
     * Output subsets of the set.
     * @param limit which subsets to generate:
     *              "all" - every subset,
     *              "num" - subsets limited in the number of elements,
     *              "sp"  - subsets whose elements all have the same parity and whose sum is less than 8.
     */
    public static void all_subset(String limit) {
        switch (limit) {
            case "all": backtrack(0); break;
            case "num": backtrack1(0); break;
            case "sp":  backtrack2(0); break;
        }
    }

    /**
     * Find all subsets of the set by backtracking; the recursion itself drives the search.
     */
    private static void backtrack(int t) {
        if (t >= n)
            output(x);
        else
            for (int i = 0; i <= 1; i++) {
                x[t] = i;
                backtrack(t + 1);
            }
    }

    /**
     * Find all subsets with fewer than 4 elements (pruned recursion).
     */
    private static void backtrack1(int t) {
        if (t >= n)
            output(x);
        else
            for (int i = 0; i <= 1; i++) {
                x[t] = i;
                if (count(x, t) < 4)
                    backtrack1(t + 1);
            }
    }

    /**
     * (Pruning helper)
     * Counts the elements already selected among positions 0..t.
     * The elements after position t have not been decided yet, so only this prefix is checked;
     * the count decides whether the recursive call should be made.
     */
    private static int count(int[] x, int t) {
        int num = 0;
        for (int i = 0; i <= t; i++) {
            if (x[i] == 1) {
                num++;
            }
        }
        return num;
    }

    /**
     * Find the subsets whose elements all have the same parity and whose sum is less than 8.
     */
    private static void backtrack2(int t) {
        if (t >= n)
            output(x);
        else
            for (int i = 0; i <= 1; i++) {
                x[t] = i;
                if (legal(x, t))
                    backtrack2(t + 1);
            }
    }

    /**
     * Checks that the selected elements all have the same parity and that their sum is less than 8.
     */
    private static boolean legal(int[] x, int t) {
        boolean bRet = true;   // whether this branch may continue (otherwise it is pruned)
        int part = 0;          // index of the first selected element, used as the parity benchmark
        for (int i = 0; i <= t; i++) {
            if (x[i] == 1) {
                part = i;
                break;
            }
        }
        for (int i = 0; i <= t; i++) {
            if (x[i] == 1) {
                bRet &= ((s[part] - s[i]) % 2 == 0);
            }
        }
        int sum = 0;
        for (int i = 0; i <= t; i++) {
            if (x[i] == 1)
                sum += s[i];
        }
        bRet &= (sum < 8);
        return bRet;
    }

    /**
     * Print the subset described by x.
     */
    private static void output(int[] x) {
        for (int i = 0; i < x.length; i++) {
            if (x[i] == 1) {
                System.out.print(s[i]);
            }
        }
        System.out.println();
    }

    public static void main(String[] args) {
        all_subset("all");
    }
}
5. Algorithm experience
Backtracking is an almost universal algorithm, useful for both large-scale and small-scale problems. In this subset problem I think backtracking is used cleverly in two ways. First, it adopts depth-first traversal, which can reach every child node from the root, and this is where pruning comes in: when the parity and sum restrictions apply, the unnecessary child nodes and all of their descendants can be removed, which greatly reduces wasted time. Second, the simplicity of the algorithm framework makes it easy to understand how the code proceeds.
Algorithm 5: branch and bound method
1, Basic description
Like the backtracking method, the branch and bound method searches for solutions of the problem on the solution space tree T. In general, however, the two methods have different goals: the backtracking method aims to find all solutions in T that satisfy the constraints, while the branch and bound method aims to find one solution that satisfies the constraints, or, among the solutions that satisfy the constraints, the one that maximizes or minimizes some objective function, that is, an optimal solution in a certain sense.
(1) Branch search algorithm
The so-called "branch" is to use the breadth first strategy to search all branches of E-node, that is, all adjacent nodes, discard the nodes that do not meet the constraints, and add the other nodes to the flexible node table. Then select a node from the table as the next E-node to continue the search.
If the next E-node is selected in different ways, there will be several different branch search methods.
1) FIFO search
2) LIFO search
3) priority queue search
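A brief sketch of how the three choices differ (illustrative, not from the original text; the class name and values are placeholders): the search order is determined entirely by which live node is taken out next, so the same skeleton becomes FIFO, LIFO, or priority queue (best-first) search depending on the container used for the live node list.

import java.util.ArrayDeque;
import java.util.PriorityQueue;

public class BranchStrategySketch {

    public static void main(String[] args) {
        // 1) FIFO search: live nodes are kept in a queue and expanded in the order generated
        ArrayDeque<Integer> fifo = new ArrayDeque<>();
        fifo.addLast(1); fifo.addLast(2); fifo.addLast(3);
        System.out.println("FIFO expands first: " + fifo.pollFirst());        // 1

        // 2) LIFO search: live nodes are kept on a stack and the newest node is expanded first
        ArrayDeque<Integer> lifo = new ArrayDeque<>();
        lifo.push(1); lifo.push(2); lifo.push(3);
        System.out.println("LIFO expands first: " + lifo.pop());              // 3

        // 3) Priority queue search: the live node with the best bound is expanded first
        PriorityQueue<Integer> best = new PriorityQueue<>((a, b) -> b - a);   // larger bound first
        best.add(1); best.add(3); best.add(2);
        System.out.println("Best-first expands first: " + best.poll());       // 3
    }
}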
(2) Branch and bound search algorithm
2, General process of branch and bound method
Due to different solution objectives, the search methods of branch and bound method and backtracking method on solution space tree T are also different. The backtracking method searches the solution space tree T in the way of depth first, while the branch and bound method searches the solution space tree T in the way of breadth first or minimum cost first.
The search strategy of the branch and bound method is: at the expansion node, first generate all of its child nodes (branches), and then select the next expansion node from the current live node list. In order to select the next expansion node effectively and speed up the search, a function value (a bound) is computed at each live node; based on these values, the most favorable node in the live node list is chosen as the expansion node, so that the search moves towards the branch of the solution space tree that contains an optimal solution, and an optimal solution is found as quickly as possible.
The branch and bound method usually searches the solution space tree of the problem breadth-first or in order of least cost (maximum benefit). The solution space tree is an ordered tree representing the solution space of the problem; the common forms are the subset tree and the permutation tree. When searching the solution space tree, the branch and bound method and the backtracking method expand the current node in different ways: in the branch and bound method, each live node has only one chance to become an expansion node. Once a live node becomes the expansion node, all of its child nodes are generated at once; among these children, those that lead to infeasible or non-optimal solutions are discarded, and the rest are added to the live node list. A node is then taken from the live node list to become the current expansion node, and the expansion process above is repeated. This continues until the desired solution is found or the live node list is empty.
3, Some differences between backtracking method and branch and bound method
In fact, some problems can be solved well by either the backtracking method or the branch and bound method, while others cannot; specific analysis may be needed to decide when to use branch and bound and when to use backtracking.
Some differences between backtracking method and branch and bound method:
Backtracking method: searches the solution space tree depth-first; the live nodes are stored on a stack; all feasible child nodes of a live node are traversed before it is popped from the stack; its goal is to find all solutions that satisfy the constraints.
Branch and bound method: searches the solution space tree breadth-first or least-cost (best) first; the live nodes are stored in a queue or priority queue; each node has only one chance to become a live node; its goal is to find one solution that satisfies the constraints, or an optimal solution in a specific sense.
4, Algorithm example: maximum loading problem

import java.util.Collections;
import java.util.LinkedList;

public class BranchAndBoundMaxLoading {

    LinkedList<HeapNode> heap;

    public static class BBnode {
        BBnode parent;      // parent node in the subset tree
        boolean leftChild;  // true if this node is a left child (the item is loaded)

        public BBnode(BBnode par, boolean ch) {
            parent = par;
            leftChild = ch;
        }
    }

    /**
     * Debug helper: print the current live node list.
     */
    public static void printReverse(LinkedList<HeapNode> list) {
        for (int i = 0; i < list.size(); i++) {
            HeapNode node = list.get(i);
            System.out.print("#" + node.uweight + "#" + node.level + " ");
        }
    }

    /**
     * Type of the live nodes stored in the maximum priority queue.
     */
    public static class HeapNode implements Comparable<HeapNode> {
        BBnode liveNode;
        int uweight;  // node priority: upper bound on the attainable loading weight
        int level;    // level of the node in the subset tree

        public HeapNode(BBnode node, int up, int lev) {
            liveNode = node;
            uweight = up;
            level = lev;
        }

        @Override
        public int compareTo(HeapNode x) {  // ascending order of the upper bound
            return Integer.compare(uweight, x.uweight);
        }
    }

    public void addLiveNode(int up, int lev, BBnode par, boolean ch) {
        // Add a live node to the queue that is kept ordered by the upper bound
        BBnode b = new BBnode(par, ch);
        HeapNode node = new HeapNode(b, up, lev);
        heap.add(node);
        Collections.sort(heap);
    }

    /**
     * Priority-queue branch and bound; returns the optimal weight, bestx returns the optimal solution.
     */
    public int maxLoading(int[] w, int c, int[] bestx) {
        heap = new LinkedList<HeapNode>();
        int n = w.length - 1;
        BBnode e = null;  // current expansion node
        int i = 1;        // level of the current expansion node
        int ew = 0;       // loading weight corresponding to the expansion node

        // r[j]: total weight of the items after item j (remaining weight)
        int[] r = new int[n + 1];
        for (int j = n - 1; j > 0; j--) {
            r[j] = r[j + 1] + w[j + 1];
        }

        // Search the subset space tree
        while (i != n + 1) {  // not yet at a leaf
            // Examine the children of the current expansion node
            if (ew + w[i] <= c) {
                // The left child (load item i) is feasible
                addLiveNode(ew + w[i] + r[i], i + 1, e, true);
            }
            // The right child (skip item i) is always feasible
            addLiveNode(ew + r[i], i + 1, e, false);
            // Take the live node with the largest upper bound as the next expansion node
            HeapNode node = heap.pollLast();
            i = node.level;
            e = node.liveNode;
            ew = node.uweight - r[i - 1];
        }

        // Reconstruct the optimal solution by walking back to the root
        for (int j = 0; j < n; j++) {
            bestx[j] = (e.leftChild) ? 1 : 0;
            e = e.parent;
        }
        for (int j = n - 1; j >= 0; j--) {
            System.out.print(bestx[j] + " ");
        }
        System.out.println();
        return ew;
    }

    public static void main(String[] args) {
        int n = 4;
        int c = 70;
        int w[] = { 0, 26, 60, 22, 18 };  // item weights; index starts at 1
        int[] bestx = new int[n + 1];
        BranchAndBoundMaxLoading b = new BranchAndBoundMaxLoading();
        System.out.println("The optimal loading order is (1 means loaded, 0 means not loaded):");
        int ew = b.maxLoading(w, c, bestx);
        System.out.println("The optimum loading weight is: " + ew);
    }
}