Handling constraints in the simulated annealing algorithm with Python

Posted by Bman900 on Fri, 18 Feb 2022 14:20:59 +0100

1. Optimization and linear programming

The three elements of an optimization problem are the decision variables, the objective function, and the constraints.

Linear programming is an optimization method for finding the extremum of a linear objective function under linear constraints. It is often used to determine how to allocate existing resources to obtain an optimal decision.

Simple linear programming problems can be solved with Lingo software, and Matlab and Python also provide library functions and solvers for linear programming that are easy to learn and use; for such problems there is no need for a simulated annealing algorithm. However, the problems derived from general linear programming, such as integer programming, mixed programming, 0/1 programming, quadratic programming, nonlinear programming, and combinatorial optimization, cannot all be handled by simply calling a library function. The simulated annealing algorithm adapts well to many complex problems and can be studied as an entry-level, general-purpose intelligent algorithm.

In other words, if we are dealing only with linear programming problems, we should not use the simulated annealing algorithm. But for a complex optimization problem that existing methods cannot handle, or when you do not know which method suits a particular class of optimization problems, the simulated annealing algorithm can still be used to solve it.

This paper uses the penalty function method to show how the simulated annealing algorithm handles constraints in linear programming problems; the same approach also applies to nonlinear programming problems.


2. How the simulated annealing algorithm handles constraints

A linear programming problem is a constrained optimization problem, while the simulated annealing algorithm is better suited to unconstrained optimization problems. Several methods are commonly used to handle the constraints:

  • 1. Upper and lower bounds on the decision variables.
    Such constraints are relatively easy to handle: it suffices to keep the initial solution and every new solution between the lower and upper bounds of the decision variables. For example: (1) generate the new solution as a random number whose range is the variable's bounds, i.e. [xMin, xMax]; (2) generate the new solution as a random number between the current solution and the variable's bound, i.e. [xNow, xMax]; (3) use a conditional check, so that a new solution that exceeds a bound is clipped to that bound, i.e. xNew = max(min(xNew, xMax), xMin). Of course, these treatments alter the probability distribution of the random numbers and therefore affect the optimization performance of the simulated annealing algorithm; this is not discussed in depth here. A sketch of option (3) follows after this list.

  • 2. The test method for inequality constraints.
    In each iteration of the simulated annealing algorithm, the newly generated solution is substituted into every inequality constraint function to check whether the constraints are satisfied. If the new solution violates any constraint, it is discarded and another new solution is generated and tested, until a new solution satisfies all the constraints. The idea is simple, and every iteration stays inside the feasible region. However, for complex problems with many tight constraints, the generated solutions may fail the test many times in a row, which makes the computation very slow or even causes it to stall. (This method is also sketched after the list.)

  • 3. The elimination method for equality constraints.
    Randomly generated new solutions can hardly satisfy an equality constraint, so the test method is generally unusable here. The elimination method instead solves the equation: one decision variable in the equality constraint is expressed in terms of the other decision variables and substituted into the objective function and the inequality constraints, thereby eliminating the constraint. The elimination method not only removes the equality constraint but also reduces the number of decision variables, effectively simplifying the optimization problem, killing two birds with one stone. However, for nonlinear equality constraints this requires solving nonlinear equations, which is itself very difficult, so the elimination method is not generally applicable.

  • 4. A more general way to handle constraints is the penalty function method, which is introduced below. (YouCans, XUPT)
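
To make methods 1 and 2 concrete, here is a minimal sketch in the spirit of the program in Section 5 (the helper names new_solution_clipped and new_solution_tested, and the constraint list, are illustrative assumptions, not part of that program):

import random

# Method 1, variant (3): perturb one variable of the current solution, then clip it to [xMin, xMax]
def new_solution_clipped(xNow, xMin, xMax, scale=0.5):
    xNew = list(xNow)
    v = random.randint(0, len(xNow)-1)                  # disturb only one of the n variables
    xNew[v] = xNow[v] + scale*(xMax[v]-xMin[v])*random.normalvariate(0, 1)
    xNew[v] = max(min(xNew[v], xMax[v]), xMin[v])       # clip to the upper and lower bounds
    return xNew

# Method 2 (test method): regenerate until all inequality constraints g(x) <= 0 hold
def new_solution_tested(xNow, xMin, xMax, constraints, maxTries=1000):
    for _ in range(maxTries):
        xNew = new_solution_clipped(xNow, xMin, xMax)
        if all(g(xNew) <= 0 for g in constraints):      # feasibility check
            return xNew
    return list(xNow)   # no feasible point found within maxTries: keep the current solution

For example, the constraints of the case in Section 4 could be passed as constraints = [lambda x: 6*x[0]+5*x[1]-60, lambda x: 10*x[0]+20*x[1]-150].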


3. Penalty function method

The penalty function method is a commonly used technique for handling constraints, and it is very effective for handling constraints in the simulated annealing algorithm. The idea is to transform the constraint conditions into a penalty function and add it to the original objective function, constructing a new objective function; when a constraint is violated, the penalty function makes the new objective function worse, so infeasible solutions tend to be discarded.

Penalty function methods include the exterior point method and the interior point method. The exterior point method penalizes points outside the feasible region (i.e., points that violate the constraints) and does not penalize points inside, so the iterates approach the feasible region D from outside. The interior point method searches inside the feasible region: the constraint boundary acts like a fence that the objective function cannot cross, so the search points are confined to the feasible region; it is therefore only applicable to inequality constraints.

Consider the constrained optimization problem:

$$\min f(X)$$

subject to

$$c_i(X) \leq 0$$

The penalty function method transforms it into the following unconstrained problem:

$$\min L_k(X) = f(X) + \sigma_k \sum_i g(c_i(X))$$

where

$$g(c_i(X)) = \max(0,\ c_i(X))^2$$

In the above equations, $g(c_i(X))$ is called the exterior penalty function and $\sigma_k$ the penalty factor. At each iteration, $\sigma_k$ is increased and the unconstrained problem is solved again; the results of successive iterations form a sequence, and the limit of this sequence is the solution of the original constrained problem.
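
To make this concrete, here is a minimal sketch of the exterior penalty in Python (the function names g and L_k are illustrative, not taken from the program in Section 5):

# Exterior penalty for a single constraint c(X) <= 0
def g(cX):
    return max(0.0, cX)**2          # zero inside the feasible region, grows quadratically outside

# Augmented objective L_k(X) = f(X) + sigma_k * sum_i g(c_i(X))
def L_k(f, constraints, X, sigma_k):
    return f(X) + sigma_k*sum(g(c(X)) for c in constraints)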


4. Mathematical modeling case

Although the simulated annealing algorithm is not recommended for linear programming problems, for ease of understanding this paper still uses the earlier linear programming problem as a case study for handling constraints. The treatment of nonlinear programming problems and nonlinear constraints is similar and will be introduced later.

4.1 Problem Description:

A factory produces two kinds of drinks. Each hundred boxes of drink A requires 6 kg of raw materials and 10 workers and yields a profit of 100,000 yuan; each hundred boxes of drink B requires 5 kg of raw materials and 20 workers and yields a profit of 90,000 yuan.
The factory now has 60 kg of raw materials and 150 workers in total, and due to other conditions the output of drink A cannot exceed 800 boxes.
Question: how should the production plan be arranged, i.e., how much of each drink should be produced, to maximize the profit?

4.2 Problem modeling:

Decision variables:
    x1: output of drink A (unit: hundred boxes)
    x2: output of drink B (unit: hundred boxes)
Objective function:
    max fx = 10*x1 + 9*x2
Constraints:
    6*x1 + 5*x2 <= 60
    10*x1 + 20*x2 <= 150
Value range:
    Given conditions: x1, x2 >= 0; x1 <= 8
    Derived conditions: from x1, x2 >= 0 and 10*x1 + 20*x2 <= 150, we get 0 <= x1 <= 15 and 0 <= x2 <= 7.5
    Therefore: 0 <= x1 <= 8, 0 <= x2 <= 7.5
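
Since this case is a plain linear program, it can also be solved directly with a library solver (as noted in Section 1) to provide a reference answer. A minimal sketch, assuming SciPy is installed; scipy.optimize.linprog minimizes, so the profit coefficients are negated:

from scipy.optimize import linprog

c = [-10, -9]                  # maximize 10*x1 + 9*x2  ->  minimize -(10*x1 + 9*x2)
A_ub = [[6, 5], [10, 20]]      # 6*x1 + 5*x2 <= 60;  10*x1 + 20*x2 <= 150
b_ub = [60, 150]
bounds = [(0, 8), (0, 7.5)]    # 0 <= x1 <= 8;  0 <= x2 <= 7.5

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)         # optimum x = [45/7, 30/7] ≈ [6.4286, 4.2857], fx ≈ 102.857

This gives a benchmark against which the simulated annealing result in Section 6 can be compared.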

4.3 Solving the constrained optimization problem with the penalty function method:

Construct the penalty functions:
    p1 = (max(0, 6*x1+5*x2-60))**2
    p2 = (max(0, 10*x1+20*x2-150))**2
Note: an equality constraint, such as x1 + 2*x2 = m, can also be transformed into a penalty function:
    p3 = (x1+2*x2-m)**2
    P(x) = p1 + p2 + ...
Construct the augmented objective function (the original problem is a maximization, so fx is negated and the augmented function is minimized):
    L(x, m(k)) = -fx + m(k)*P(x)
m(k): penalty factor, which increases gradually with the outer-loop iteration count k

In the simulated annealing algorithm, m(k) increases with the number of outer-loop iterations, but it must remain constant within the inner loop.
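
A skeleton of how this interacts with the two annealing loops (tInitial, tFinal, alfa, and nMarkov take the parameter values set in the program below; the geometric growth factor 1.05 is a hypothetical choice, while the program below simply uses the outer-loop counter kIter as m(k)):

tInitial, tFinal, alfa, nMarkov = 100.0, 1.0, 0.98, 100   # parameter values from the program below
mk = 1.0
tNow = tInitial
while tNow >= tFinal:            # outer loop: one temperature state per pass
    for k in range(nMarkov):     # inner loop: mk is held constant
        pass                     # generate a new solution, evaluate cal_Energy(xNew, nVar, mk), accept/reject
    mk *= 1.05                   # hypothetical geometric growth of the penalty factor between temperature states
    tNow = tNow * alfa           # cool down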


5. Simulated annealing Python program: solving a constrained linear programming problem with the penalty function method

# Simulated annealing algorithm program: penalty function method to solve linear programming problems
# Program: SimulatedAnnealing_v2.py
# Purpose: Simulated annealing algorithm for function optimization
# v2.0: use the penalty function method to deal with the constraint problem
# Copyright 2021 YouCans, XUPT
# Created: 2021-05-01

#  -*- coding: utf-8 -*-
import math                         # Import module
import random                       # Import module
import pandas as pd                 # Import module YouCans, XUPT
import numpy as np                  # Import the module numpy, which is abbreviated as np
import matplotlib.pyplot as plt     
from datetime import datetime

# Subroutine: defines the objective function of the optimization problem
def cal_Energy(X, nVar, mk):    # mk: penalty factor, which increases gradually with the outer-loop iteration count k
    p1 = (max(0, 6*X[0]+5*X[1]-60))**2      # penalty for violating 6*x1 + 5*x2 <= 60
    p2 = (max(0, 10*X[0]+20*X[1]-150))**2   # penalty for violating 10*x1 + 20*x2 <= 150
    fx = -(10*X[0]+9*X[1])                  # the original problem maximizes profit, so minimize -fx
    return fx+mk*(p1+p2)

# Subroutine: parameter setting of simulated annealing algorithm
def ParameterSetting():
    cName = "funcOpt"           # Define the problem name YouCans, XUPT
    nVar = 2                    # Number of decision variables, y = f(x1,...,xn)
    xMin = [0, 0]               # Lower limit of given search space, x1_min,..xn_min
    xMax = [8, 7.5]             # Upper limit of given search space, x1_max,..xn_max

    tInitial = 100.0            # Set initial annealing temperature
    tFinal  = 1                 # Set stop temperature
    alfa    = 0.98              # Set cooling parameters, T(k)=alfa*T(k-1)
    meanMarkov = 100            # Markov chain length, that is, the number of inner loop operations
    scale   = 0.5               # Define the search step size, which can be set to a fixed value or gradually reduced
    return cName, nVar, xMin, xMax, tInitial, tFinal, alfa, meanMarkov, scale

# Simulated annealing algorithm
def OptimizationSSA(nVar,xMin,xMax,tInitial,tFinal,alfa,meanMarkov,scale):
    # ======Initialize random number generator======
    randseed = random.randint(1, 100)
    random.seed(randseed)  # The random number generator can set the seed or set it to a specified integer

    # ======Initial solution of stochastic optimization problem======
    xInitial = np.zeros((nVar))   # Initialize, create array
    for v in range(nVar):
        # random.uniform(min,max) randomly generates a real number in the range of [min,max]
        xInitial[v] = random.uniform(xMin[v], xMax[v])
    # Call sub function cal_Energy calculates the objective function value of the current solution
    fxInitial = cal_Energy(xInitial, nVar, 1) # m(k): penalty factor, the initial value is 1

    # ======Simulated annealing algorithm initialization======
    xNew = np.zeros((nVar))         # Initialize, create array
    xNow = np.zeros((nVar))         # Initialize, create array
    xBest = np.zeros((nVar))        # Initialize, create array
    xNow[:]  = xInitial[:]          # Initialize the current solution and set the initial solution as the current solution
    xBest[:] = xInitial[:]          # Initialize the optimal solution and set the current solution as the optimal solution
    fxNow  = fxInitial              # Set the objective function of the initial solution to the current value
    fxBest = fxInitial              # Set the objective function of the current solution as the optimal value
    print('x_Initial:{:.6f},{:.6f},\tf(x_Initial):{:.6f}'.format(xInitial[0], xInitial[1], fxInitial))

    recordIter = []                 # Initialization, number of external cycles
    recordFxNow = []                # Initialization, the objective function value of the current solution
    recordFxBest = []               # Initialization, the objective function value of the optimal solution
    recordPBad = []                 # Initialization, acceptance probability of inferior solution
    kIter = 0                       # Number of external loop iterations, number of temperature states
    totalMar = 0                    # Total Markov chain length
    totalImprove = 0                # fxBest improvement times
    nMarkov = meanMarkov            # Fixed length Markov chain

    # ======Start simulated annealing optimization======
    # The outer loop ends when the current temperature drops to the stop temperature
    tNow = tInitial                 # Initialize current temperature
    while tNow >= tFinal:           # Outer loop, over temperature states
        # At the current temperature, a sufficient number of state transitions (nMarkov) are performed to achieve thermal equilibrium
        kBetter = 0                 # Number of times to obtain high-quality solutions
        kBadAccept = 0              # Number of times to accept inferior solutions
        kBadRefuse = 0              # Number of rejections of inferior solutions

        # ---Inner loop, the number of cycles is the length of Markov chain
        for k in range(nMarkov):    # Inner loop, the number of cycles is the length of Markov chain
            totalMar += 1           # Total Markov chain length counter

            # ---Generate new solutions
            # Generate a new solution: generate a new solution by randomly disturbing near the current solution. The new solution must be within the range of [min,max]
            # Scheme 1: only one of the n variables is disturbed, and the other n-1 variables remain unchanged
            xNew[:] = xNow[:]
            v = random.randint(0, nVar-1)   # Generate random numbers between [0,nVar-1]
            xNew[v] = xNow[v] + scale * (xMax[v]-xMin[v]) * random.normalvariate(0, 1)
            # random.normalvariate(0, 1): generates a random real number from a normal distribution with mean 0 and standard deviation 1
            xNew[v] = max(min(xNew[v], xMax[v]), xMin[v])  # Ensure that the new solution is within the range of [min,max]

            # ---Calculate the objective function and energy difference
            # Call sub function cal_Energy calculates the objective function value of the new solution
            fxNew = cal_Energy(xNew, nVar, kIter)
            deltaE = fxNew - fxNow

            # ---Accept the new solution according to Metropolis guidelines
            # Acceptance judgment: decide whether to accept the new solution according to Metropolis criteria
            if fxNew < fxNow:  # Better solution: if the objective function of the new solution is better than the current solution, the new solution is accepted
                accept = True
                kBetter += 1
            else:  # Tolerant solution: if the objective function of the new solution is worse than the current solution, the new solution will be accepted with a certain probability
                pAccept = math.exp(-deltaE / tNow)  # Calculate the state transition probability of the tolerant solution
                if pAccept > random.random():
                    accept = True  # Accept inferior solutions
                    kBadAccept += 1
                else:
                    accept = False  # Reject inferior solutions
                    kBadRefuse += 1

            # Save new solution
            if accept == True:  # If the new solution is accepted, the new solution is saved as the current solution
                xNow[:] = xNew[:]
                fxNow = fxNew
                if fxNew < fxBest:  # If the objective function of the new solution is better than the optimal solution, the new solution is saved as the optimal solution
                    fxBest = fxNew
                    xBest[:] = xNew[:]
                    totalImprove += 1
                    scale = scale*0.99  # Variable search step size, gradually reduce the search range and improve the search accuracy
                    
        # ---Data sorting after internal circulation
        # Complete the search of current temperature, save data and output
        pBadAccept = kBadAccept / max(kBadAccept + kBadRefuse, 1)  # Acceptance probability of inferior solutions (guard against division by zero)
        recordIter.append(kIter)  # Current external circulation times
        recordFxNow.append(round(fxNow, 4))  # Objective function value of current solution
        recordFxBest.append(round(fxBest, 4))  # Objective function value of optimal solution
        recordPBad.append(round(pBadAccept, 4))  # Acceptance probability of inferior solutions

        if kIter%10 == 0:                           # Print progress every 10 outer-loop iterations
            print('i:{},t(i):{:.2f}, badAccept:{:.6f}, f(x)_best:{:.6f}'.\
                format(kIter, tNow, pBadAccept, fxBest))

        # Slowly cool down to a new temperature. Cooling curve: T(k)=alfa*T(k-1)
        tNow = tNow * alfa
        kIter = kIter + 1
        fxBest = cal_Energy(xBest, nVar, kIter)  # Since the penalty factor increases after iteration, the augmented objective function needs to be reconstructed
        # ======End the simulated annealing process======

    print('improve:{:d}'.format(totalImprove))
    return kIter,xBest,fxBest,fxNow,recordIter,recordFxNow,recordFxBest,recordPBad

# Result verification and output
def ResultOutput(cName,nVar,xBest,fxBest,kIter,recordFxNow,recordFxBest,recordPBad,recordIter):
    # ======Verification and output of optimization results======
    fxCheck = cal_Energy(xBest, nVar, kIter)
    if abs(fxBest - fxCheck)>1e-3:   # Objective function test
        print("Error 2: Wrong total millage!")
        return
    else:
        print("\nOptimization by simulated annealing algorithm:")
        for i in range(nVar):
            print('\tx[{}] = {:.6f}'.format(i,xBest[i]))
        print('\n\tf(x):{:.6f}'.format(cal_Energy(xBest,nVar,0)))  # mk=0: report the original objective fx without the penalty term
    return


# main program
def main(): # YouCans, XUPT

    # Parameter setting, optimization problem parameter definition, simulated annealing algorithm parameter setting
    [cName, nVar, xMin, xMax, tInitial, tFinal, alfa, meanMarkov, scale] = ParameterSetting()
    # print([nVar, xMin, xMax, tInitial, tFinal, alfa, meanMarkov, scale])

    # Simulated annealing algorithm
    [kIter,xBest,fxBest,fxNow,recordIter,recordFxNow,recordFxBest,recordPBad] \
        = OptimizationSSA(nVar,xMin,xMax,tInitial,tFinal,alfa,meanMarkov,scale)
    # print(kIter, fxNow, fxBest, pBadAccept)

    # Result verification and output
    ResultOutput(cName, nVar,xBest,fxBest,kIter,recordFxNow,recordFxBest,recordPBad,recordIter)


if __name__ == '__main__':
    main()


6. Running results

Optimization by simulated annealing algorithm:
    x[0] = 6.577964
    x[1] = 4.111469
    f(x):-102.782857
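
Note that f(x) is the negated profit, so the profit found is about 102.78, close to the exact LP optimum of about 102.86 from the linprog check in Section 4.2. Because the exterior penalty method approaches the feasible region from outside, the returned point may still violate a constraint slightly at a finite penalty factor. A quick check with the values printed above:

x1, x2 = 6.577964, 4.111469
print(6*x1 + 5*x2)      # ≈ 60.025  (limit 60: slightly violated, typical of exterior penalty methods)
print(10*x1 + 20*x2)    # ≈ 148.009 (limit 150: satisfied)
print(10*x1 + 9*x2)     # ≈ 102.783 (profit, i.e. -f(x) above)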


Copyright description:

Original works
Copyright 2021 YouCans, XUPT
Created: 2021-05-01

Topics: Python Programming Algorithm AI Mathematical Modeling