1. Problem Description

A genetic algorithm is used to solve five typical two-variable, single-objective function optimization problems. For each of the five functions (function expression and decision-variable range given below), the minimum is to be found as accurately as possible.

The five functions are:

Two-dimensional sphere function: $f_1(x, y) = x^2 + y^2$, $x, y \in [-5.12, 5.12]$

De Jong function: $f_2(x, y) = 100\left(y - x^2\right)^2 + (x - 1)^2$, $x, y \in [-2.048, 2.048]$

Himmelblau function: $f_3(x, y) = \left(x^2 + y - 11\right)^2 + \left(x + y^2 - 7\right)^2$, $x, y \in [-6, 6]$

Six-hump camel function: $f_4(x, y) = 4x^2 - 2.1x^4 + x^6/3 + xy + 4y^4 - 4y^2$, $x, y \in [-5, 5]$

Bohachevsky function: $f_5(x, y) = x^2 + 2y^2 - 0.3\cos(3\pi x)\cos(4\pi y) + 0.3$, $x, y \in [-1, 1]$
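The five benchmark functions above can be written down directly in Python (the language used for the simulation below); the function names here are illustrative:

```python
import math

# The five benchmark functions from the text (all to be minimized).
def f1(x, y):  # sphere, x, y in [-5.12, 5.12], minimum 0 at (0, 0)
    return x**2 + y**2

def f2(x, y):  # De Jong, x, y in [-2.048, 2.048], minimum 0 at (1, 1)
    return 100 * (y - x**2)**2 + (x - 1)**2

def f3(x, y):  # Himmelblau, x, y in [-6, 6], minimum 0 at e.g. (3, 2)
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def f4(x, y):  # six-hump camel, x, y in [-5, 5], minimum ~ -1.0316
    return 4*x**2 - 2.1*x**4 + x**6 / 3 + x*y + 4*y**4 - 4*y**2

def f5(x, y):  # Bohachevsky, x, y in [-1, 1], minimum 0 at (0, 0)
    return x**2 + 2*y**2 - 0.3 * math.cos(3*math.pi*x) * math.cos(4*math.pi*y) + 0.3
```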

2. Solution Approach

A genetic algorithm (GA) is a stochastic search algorithm based on natural selection and genetic mechanisms in biology. It simulates natural selection, sexual reproduction, and gene mutation: in each iteration the fittest individuals are selected from the population, then crossover (the main way of producing new individuals) and mutation (an auxiliary way of producing new individuals that helps escape local optima) are applied to produce a new, fitter generation. This process repeats until some convergence criterion is satisfied. Applying a genetic algorithm involves the following: generation of the initial population, definition of the fitness function, choice of the genetic operators (selection, crossover, mutation), and setting of the termination conditions.

(1) Generation of initial population

The initial population is binary-encoded. Before it can be generated, the number of genes per chromosome must be fixed; this is determined by the precision the algorithm should achieve. If each decision variable is to be accurate to five decimal places, the number of genes $l_j$ needed for decision variable $j$ is determined from

$2^{l_j - 1} < (b_j - a_j) \times 10^5 \leq 2^{l_j} - 1$,

where $b_j$ and $a_j$ are the upper and lower bounds of that variable. The number of genes per individual is the sum of the $l_j$ over all decision variables.

Once the number of genes is determined, the initial population can be produced by randomly generating binary strings.
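The two steps above can be sketched as follows, with precision 5 as in the text; `gene_count` and `init_population` are illustrative names:

```python
import math
import random

def gene_count(a, b, precision=5):
    """Smallest l with (b - a) * 10**precision <= 2**l - 1, i.e. enough
    bits to split [a, b] into steps no coarser than 10**-precision."""
    points = (b - a) * 10**precision
    return math.ceil(math.log2(points + 1))

def init_population(pop_size, chrom_len):
    """Random binary chromosomes: one list of 0/1 genes per individual."""
    return [[random.randint(0, 1) for _ in range(chrom_len)]
            for _ in range(pop_size)]

# f1's domain [-5.12, 5.12] needs 20 bits per variable, so 40 per individual.
```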

(2) Fitness function

The fitness function reflects an individual's adaptation to its environment. For the minimization problems above, the smaller the value of f, the fitter the individual. To match the usual convention that higher fitness is better, the problem of minimizing f is transformed into maximizing -f. The fitness function is therefore -f: the larger its value, the fitter the individual and the more likely it should be kept.
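A minimal sketch of decoding a chromosome and evaluating fitness as -f, assuming the first half of the chromosome encodes x and the second half encodes y (a layout the text does not specify):

```python
def decode(bits, a, b):
    """Map a binary gene string to a real value in [a, b]."""
    n = int("".join(map(str, bits)), 2)
    return a + n * (b - a) / (2**len(bits) - 1)

def fitness(individual, func, a, b):
    """Fitness = -f(x, y), so that minimizing f becomes maximizing fitness.
    Assumed layout: first half of the chromosome is x, second half is y."""
    half = len(individual) // 2
    x = decode(individual[:half], a, b)
    y = decode(individual[half:], a, b)
    return -func(x, y)
```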

(3) Determination of genetic operators

For the selection step, roulette-wheel selection is used: the idea is that individuals with higher fitness are more likely to be retained. On top of this, an elitism strategy is added, i.e. the fittest individual is always carried over at each selection, which improves population quality and speeds up convergence.
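Roulette-wheel selection with elitism might be sketched as follows. Because fitness = -f can be negative, the weights are shifted to be non-negative before sampling, an implementation detail assumed here that the text does not spell out:

```python
import random

def select(population, fits):
    """Roulette-wheel selection with elitism: the fittest individual is
    copied straight into the next generation; the remaining slots are
    filled by sampling proportionally to (shifted) fitness."""
    best = max(range(len(population)), key=lambda i: fits[i])
    shift = min(fits)
    weights = [f - shift + 1e-12 for f in fits]  # strictly positive
    chosen = random.choices(population, weights=weights,
                            k=len(population) - 1)
    return [population[best][:]] + [ind[:] for ind in chosen]
```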

For the crossover step, single-point crossover is used, with the crossover probability set to 0.75. Crossover operates on individuals: if a randomly generated number is less than 0.75, the individual joins the mating pool and is crossed with another randomly chosen individual to produce new offspring.
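A sketch of single-point crossover with probability 0.75; pairing within the mating pool is assumed to be random:

```python
import random

def crossover(pop, pc=0.75):
    """Single-point crossover: each individual enters the mating pool
    with probability pc; pool members are paired off and swap their
    tails after a random cut point."""
    pool = [i for i in range(len(pop)) if random.random() < pc]
    random.shuffle(pool)
    for a, b in zip(pool[::2], pool[1::2]):
        point = random.randint(1, len(pop[a]) - 1)
        pop[a][point:], pop[b][point:] = pop[b][point:], pop[a][point:]
    return pop
```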

For the mutation step, single-point (bit-flip) mutation is used, with the mutation probability set to 0.01. Mutation operates on each gene of the chromosome: if a randomly generated number is less than 0.01, that locus mutates, i.e. 0 and 1 are exchanged.
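Bit-flip mutation applied independently to every gene, as described above:

```python
import random

def mutate(pop, pm=0.01):
    """Bit-flip mutation: each gene of each chromosome flips (0 <-> 1)
    independently with probability pm."""
    for ind in pop:
        for i in range(len(ind)):
            if random.random() < pm:
                ind[i] = 1 - ind[i]
    return pop
```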

(4) Termination conditions

First, upper and lower bounds on the number of generations are set: at most 1000 generations and at least 100. In addition, if the best value improves only marginally over many consecutive generations, the run is terminated early, striking a balance between execution time and accuracy. Here, the run terminates early if the improvement in the best value over 200 consecutive generations is no more than 1e-6.
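The termination rule above can be sketched as a predicate over the per-generation history of best values (names and signature are illustrative):

```python
def should_stop(history, gen, min_gen=100, max_gen=1000,
                patience=200, tol=1e-6):
    """history: best value of f (minimized) recorded each generation.
    Stop at max_gen, or once min_gen is reached and the best value has
    improved by no more than tol over the last `patience` generations."""
    if gen >= max_gen:
        return True
    if gen < min_gen or len(history) <= patience:
        return False
    return history[-patience - 1] - history[-1] <= tol
```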

The simulation is implemented in Python. For each optimization function, the optimal value and the corresponding decision-variable values are computed, and the relationship between generation number and best value is visualized. Decision variables are accurate to 5 decimal places and the objective value to 6.

3. Simulation Results and Analysis

(1) Simulation results for f1

The function $f_1(x, y) = x^2 + y^2$, $x, y \in [-5.12, 5.12]$, converges after 307 generations. Figure 3.1 shows the result: the optimal value 8e-06 is reached at x = -0.00049, y = 0.00285, which essentially converges to the exact solution 0. The other functions behave in the same way and are not discussed individually; only their results are shown.

Figure 3.1 Solution result for f1

Figure 3.2 visualizes the relationship between f1 and x, y; the red dot marks the minimum the function converges to. The other functions are visualized the same way.

Figure 3.2 Visualization of f1

Figure 3.3 shows how the current-generation best value (green curve) and the best-so-far value (red curve) change as the generations increase. The red curve always lies at or below the green one, showing that the best value never gets worse; although the current-generation best fluctuates, it also improves on the whole. The best-so-far value is marked every 50 generations (specific values in Table 3.1); as the generations increase, the improvement becomes negligible and the function converges. The other functions behave the same way.

Figure 3.3 Generation vs. best value for f1

Table 3.1 Generation vs. best value for f1

Generation | 1        | 51       | 101     | 151   | 201   | 251   | 301
Best value | 0.247035 | 0.001725 | 1.2e-05 | 8e-06 | 8e-06 | 8e-06 | 8e-06
(2) Simulation results for f2

Function: $f_2(x, y) = 100\left(y - x^2\right)^2 + (x - 1)^2$, $x, y \in [-2.048, 2.048]$

Figure 3.4 Solution result for f2

Figure 3.5 Visualization of f2

Figure 3.6 Generation vs. best value for f2

Table 3.2 Generation vs. best value for f2

Generation | 1        | 51      | 101     | 151     | 201
Best value | 1.418274 | 0.00648 | 0.00648 | 0.00518 | 0.002211
Generation | 251      | 301     | 351     | 401     | 451
Best value | 0.000868 | 0.00085 | 0.00085 | 0.00085 | 0.00085
(3) Simulation results for f3

Function: $f_3(x, y) = \left(x^2 + y - 11\right)^2 + \left(x + y^2 - 7\right)^2$, $x, y \in [-6, 6]$

Figure 3.7 Solution result for f3

Figure 3.8 Visualization of f3

Figure 3.9 Generation vs. best value for f3

Table 3.3 Generation vs. best value for f3

Generation | 1        | 51       | 101      | 151     | 201      | 251      | 301
Best value | 37.36007 | 0.177718 | 0.177718 | 0.06862 | 0.021468 | 0.001839 | 0.001839
(4) Simulation results for f4

Function: $f_4(x, y) = 4x^2 - 2.1x^4 + x^6/3 + xy + 4y^4 - 4y^2$, $x, y \in [-5, 5]$

Figure 3.10 Solution result for f4

Figure 3.11 Visualization of f4

Figure 3.12 Generation vs. best value for f4

Table 3.4 Generation vs. best value for f4

Generation | 1         | 51        | 101       | 151       | 201       | 251
Best value | -0.836399 | -1.029728 | -1.030549 | -1.030549 | -1.030549 | -1.031549
Generation | 301       | 351       | 401       | 451       | 501       | 551
Best value | -1.031576 | -1.031617 | -1.031625 | -1.031625 | -1.031625 | -1.031625
(5) Simulation results for f5

Function: $f_5(x, y) = x^2 + 2y^2 - 0.3\cos(3\pi x)\cos(4\pi y) + 0.3$, $x, y \in [-1, 1]$

Figure 3.13 Solution result for f5

Figure 3.14 Visualization of f5

Figure 3.15 Generation vs. best value for f5

Table 3.5 Generation vs. best value for f5

Generation | 1        | 51       | 101      | 151      | 201 | 251
Best value | 0.261395 | 0.023016 | 0.005106 | 0.000206 | 0.0 | 0.0
