Algorithms
List of implemented metaheuristics.
Evolutionary Centers Algorithm
Metaheuristics.ECA
— Type
ECA(;
η_max = 2.0,
K = 7,
N = 0,
N_init = N,
p_exploit = 0.95,
p_bin = 0.02,
p_cr = Float64[],
adaptive = false,
resize_population = false,
information = Information(),
options = Options()
)
Parameters for the metaheuristic ECA: step-size η_max, K is the number of vectors used to compute the center of mass, and N is the population size.
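The center-of-mass idea behind ECA can be sketched as follows. This is an illustrative sketch only, not the package's internal implementation; the fitness-based weighting scheme shown here is an assumption:

```julia
# Sketch: fitness-weighted center of mass of K candidate solutions,
# so better (lower-f) solutions pull the center harder.
# Illustrative only; ECA's internal weighting may differ.
f(x) = sum(x.^2)                               # objective to minimize
K = 7
population = [2 .* rand(3) .- 1 for _ in 1:K]  # K random solutions in [-1, 1]^3

fitness = [f(x) for x in population]
weights = maximum(fitness) .- fitness .+ eps() # better solutions get larger weight
center  = sum(w .* x for (w, x) in zip(weights, population)) ./ sum(weights)
```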
Example
julia> f(x) = sum(x.^2)
f (generic function with 1 method)
julia> optimize(f, [-1 -1 -1; 1 1 1.0], ECA())
+=========== RESULT ==========+
| Iter.: 1021
| f(x) = 1.68681e-163
| solution.x = [2.5517634463667404e-82, -2.9182760041942484e-82, -1.3565584801935802e-82]
| f calls: 21454
| Total time: 0.0894 s
+============================+
julia> optimize(f, [-1 -1 -1; 1 1 1.0], ECA(N = 10, η_max = 1.0, K = 3))
+=========== RESULT ==========+
| Iter.: 1506
| f(x) = 0.000172391
| solution.x = [-6.340714627875324e-5, -0.004127226953894587, 0.012464071313908906]
| f calls: 15069
| Total time: 0.0531 s
+============================+
Differential Evolution
Metaheuristics.DE
— Type
DE(;
N = 0,
F = 1.0,
CR = 0.9,
strategy = :rand1,
information = Information(),
options = Options()
)
Parameters for the Differential Evolution (DE) algorithm: step-size F, CR controls the binomial crossover, and N is the population size. The parameter strategy selects the variation operator (:rand1, :rand2, :best1, :best2, :randToBest1).
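For instance, the :rand1 strategy builds a mutant from three distinct random members and then applies binomial crossover. A simplified sketch (the usual rule that at least one component is forced to come from the mutant is omitted here):

```julia
using Random

# Sketch of DE with strategy = :rand1 and binomial crossover:
# v = x_r1 + F * (x_r2 - x_r3); trial takes v[j] with probability CR.
# Illustrative only, not the package internals.
F, CR = 1.0, 0.9
pop = [2 .* rand(3) .- 1 for _ in 1:10]
r1, r2, r3 = randperm(10)[1:3]                 # three distinct random indices
v = pop[r1] .+ F .* (pop[r2] .- pop[r3])       # mutant vector
target = pop[1]
trial = [rand() < CR ? v[j] : target[j] for j in 1:3]   # binomial crossover
```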
Example
julia> f(x) = sum(x.^2)
f (generic function with 1 method)
julia> optimize(f, [-1 -1 -1; 1 1 1.0], DE())
+=========== RESULT ==========+
| Iter.: 437
| f(x) = 0
| solution.x = [0.0, 0.0, 0.0]
| f calls: 13134
| Total time: 0.3102 s
+============================+
julia> optimize(f, [-1 -1 -1; 1 1 1.0], DE(N=50, F=1.5, CR=0.8))
+=========== RESULT ==========+
| Iter.: 599
| f(x) = 9.02214e-25
| solution.x = [-4.1003250484858545e-13, -6.090890160928905e-13, -6.025762626763004e-13]
| f calls: 30000
| Total time: 0.0616 s
+============================+
Particle Swarm Optimization
Metaheuristics.PSO
— Type
PSO(;
N = 0,
C1 = 2.0,
C2 = 2.0,
ω = 0.8,
information = Information(),
options = Options()
)
Parameters for the Particle Swarm Optimization (PSO) algorithm: learning rates C1 and C2, N is the population size, and ω controls the inertia weight.
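The roles of these parameters can be seen in the canonical velocity update, sketched below for a single particle (illustrative; the pbest and gbest values are assumed known):

```julia
# Canonical PSO update for one particle (illustrative sketch):
# v ← ω v + C1 r1 (pbest - x) + C2 r2 (gbest - x);  x ← x + v
C1, C2, ω = 2.0, 2.0, 0.8
x, v = rand(3), zeros(3)
pbest, gbest = copy(x), zeros(3)   # personal and global best positions (assumed)
r1, r2 = rand(3), rand(3)          # fresh uniform randoms per dimension
v = ω .* v .+ C1 .* r1 .* (pbest .- x) .+ C2 .* r2 .* (gbest .- x)
x = x .+ v
```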
Example
julia> f(x) = sum(x.^2)
f (generic function with 1 method)
julia> optimize(f, [-1 -1 -1; 1 1 1.0], PSO())
+=========== RESULT ==========+
| Iter.: 999
| f(x) = 3.23944e-48
| solution.x = [1.0698542573895642e-24, -1.4298101555926563e-24, -2.247029420442994e-25]
| f calls: 30000
| Total time: 0.4973 s
+============================+
julia> optimize(f, [-1 -1 -1; 1 1 1.0], PSO(N = 100, C1=1.5, C2=1.5, ω = 0.7))
+=========== RESULT ==========+
| Iter.: 299
| f(x) = 1.41505e-38
| solution.x = [2.161357427851024e-20, -1.1599444038307776e-19, 1.5122345732802047e-20]
| f calls: 30000
| Total time: 0.2128 s
+============================+
Artificial Bee Colony
Metaheuristics.ABC
— Type
ABC(;
N = 50,
Ne = div(N+1, 2),
No = div(N+1, 2),
limit=10,
information = Information(),
options = Options()
)
ABC implements the original parameters for the Artificial Bee Colony algorithm. N is the population size, Ne is the number of employed bees, No is the number of onlooker bees, and limit is the maximum number of times a solution is visited without improvement before it is abandoned.
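The limit rule can be sketched as follows (illustrative; the variable names here are our own, not the package API):

```julia
# Sketch of the `limit` rule: a food source whose trial counter exceeds
# `limit` is abandoned and replaced by a scout's random source.
limit = 10
trials = 11                      # consecutive non-improving visits (hypothetical)
lo, hi = -1.0, 1.0               # box bounds
source = [0.3, -0.2, 0.5]
if trials > limit
    source = lo .+ (hi - lo) .* rand(3)   # scout phase: random restart
    trials = 0
end
```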
Example
julia> f(x) = sum(x.^2)
f (generic function with 1 method)
julia> optimize(f, [-1 -1 -1; 1 1 1.0], ABC())
+=========== RESULT ==========+
| Iter.: 593
| f(x) = 3.54833e-25
| solution.x = [3.448700205761237e-13, 4.805851037329074e-13, 7.025504722610375e-14]
| f calls: 30019
| Total time: 0.2323 s
+============================+
julia> optimize(f, [-1 -1 -1; 1 1 1.0], ABC(N = 80, No = 20, Ne = 50, limit=5))
+=========== RESULT ==========+
| Iter.: 405
| f(x) = 2.24846e-07
| solution.x = [0.0002682351072804559, 0.00020460896416511776, 0.0003332131896109299]
| f calls: 30043
| Total time: 0.2652 s
+============================+
MOEA/D-DE
Metaheuristics.MOEAD_DE
— Type
MOEAD_DE(weights)
MOEAD_DE implements the original version of MOEA/D-DE. It uses the constraint-handling method based on the sum of violations (for constrained optimization): g(x, λ, z) = max(λ .* abs.(fx - z)) + sum(max.(0, gx)) + sum(abs.(hx))
To use MOEAD_DE, the output from the objective function should be a 3-tuple (f::Vector, g::Vector, h::Vector), where f contains the objective functions, and g and h are the inequality and equality constraints, respectively.
A feasible solution is such that g_i(x) ≤ 0 and h_j(x) = 0.
Ref. Multiobjective Optimization Problems With Complicated Pareto Sets, MOEA/D and NSGA-II; Hui Li and Qingfu Zhang.
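The aggregation above transcribes directly into Julia (illustrative; the function name g_agg is our own):

```julia
# Penalized scalarization from the text: weighted Chebyshev term plus
# summed constraint violations.
g_agg(fx, λ, z, gx, hx) = maximum(λ .* abs.(fx .- z)) +
                          sum(max.(0.0, gx)) + sum(abs.(hx))

fx = [0.5, 0.8]      # objective values f(x)
λ  = [0.5, 0.5]      # weight vector
z  = [0.0, 0.0]      # ideal (reference) point
gx = [-0.1]          # inequality constraints (feasible when ≤ 0)
hx = [0.0]           # equality constraints
g_agg(fx, λ, z, gx, hx)   # 0.4: feasible, so only the Chebyshev term remains
```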
Example
Assume you want to solve the following optimization problem:
Minimize:
f(x) = (x_1, x_2)
subject to:
g(x) = x_1^2 + x_2^2 - 1 ≤ 0
x_1, x_2 ∈ [-1, 1]
A solution can be:
# Dimension
D = 2
# Objective function
f(x) = ( x, [sum(x.^2) - 1], [0.0] )
# bounds
bounds = [-1 -1;
1 1.0
]
nobjectives = 2
npartitions = 100
# reference points (Das and Dennis's method)
weights = gen_ref_dirs(nobjectives, npartitions)
# define the parameters
moead_de = MOEAD_DE(weights, options=Options(debug=false, iterations = 250))
# optimize
status_moead = optimize(f, bounds, moead_de)
# show results
display(status_moead)
Gravitational Search Algorithm
Metaheuristics.CGSA
— Type
CGSA(;
N::Int = 30,
chValueInitial::Real = 20,
chaosIndex::Real = 9,
ElitistCheck::Int = 1,
Rpower::Int = 1,
Rnorm::Int = 2,
wMax::Real = chValueInitial,
wMin::Real = 1e-10,
information = Information(),
options = Options()
)
CGSA is an extension of the GSA algorithm that uses chaotic maps to define the gravitational constant of the gravitational search algorithm.
Ref. Chaotic gravitational constants for the gravitational search algorithm. Applied Soft Computing 53 (2017): 407-419.
Parameters:
- N: Population size
- chValueInitial: Initial value of the chaotic variable
- chaosIndex: Integer 1 ≤ chaosIndex ≤ 10 selecting the chaotic map that models the chaos
- Rpower: power related to the distance norm(x)^Rpower
- Rnorm: is the value as in norm(x, Rnorm)
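As an illustration, one classic chaotic map (the logistic map) can drive a gravitational constant scaled into [wMin, wMax]. This is a sketch only; the package's ten chaotic maps and their exact normalization may differ:

```julia
# Sketch: logistic map driving a chaotic "gravitational constant" G(t)
# scaled into [wMin, wMax]. Illustrative; not CGSA's internal code.
function chaotic_G(n; wMin = 1e-10, wMax = 20.0, chaos = 0.7)
    G = Float64[]
    for _ in 1:n
        chaos = 4 * chaos * (1 - chaos)       # logistic map, r = 4
        push!(G, wMin + (wMax - wMin) * chaos)
    end
    return G
end
chaotic_G(5)
```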
Example
julia> f(x) = sum(x.^2)
f (generic function with 1 method)
julia> optimize(f, [-1 -1 -1; 1 1 1.0], CGSA())
+=========== RESULT ==========+
| Iter.: 499
| f(x) = 0.000235956
| solution.x = [0.0028549782101697785, -0.0031385153631797724, 0.014763299731686608]
| f calls: 15000
| Total time: 0.1003 s
+============================+
julia> optimize(f, [-1 -1 -1; 1 1 1.0], CGSA(N = 80, chaosIndex = 1))
+=========== RESULT ==========+
| Iter.: 499
| f(x) = 0.000102054
| solution.x = [0.00559987302269564, 0.00017535321765604905, 0.008406213942044265]
| f calls: 40000
| Total time: 0.5461 s
+============================+
Simulated Annealing
Metaheuristics.SA
— Type SA(;
x_initial::Vector = zeros(0),
N::Int = 500,
tol_fun::Real= 1e-4,
information = Information(),
options = Options()
)
Parameters for the method of Simulated Annealing (Kirkpatrick et al., 1983).
Parameters:
- x_initial: Initial solution. If empty, SA generates a random one within the bounds.
- N: The number of test points per iteration.
- tol_fun: tolerance value for the Metropolis condition to accept or reject the test point as current point.
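The Metropolis condition works roughly as follows. This is a generic sketch; exactly how tol_fun enters the package's acceptance test is not specified here:

```julia
# Generic Metropolis acceptance rule (illustrative sketch):
# always accept improvements; accept worse points with probability exp(-Δ/T).
function metropolis_accept(f_current, f_candidate, T)
    Δ = f_candidate - f_current
    Δ <= 0 && return true          # improvement: always accept
    return rand() < exp(-Δ / T)    # worse: accept with Boltzmann probability
end
metropolis_accept(1.0, 0.5, 0.1)   # true: the candidate improves
```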
Example
julia> f(x) = sum(x.^2)
f (generic function with 1 method)
julia> optimize(f, [-1 -1 -1; 1 1 1.0], SA())
+=========== RESULT ==========+
| Iter.: 60
| f(x) = 2.84574e-73
| solution.x = [-5.307880224731971e-37, -5.183298967486749e-38, 1.2301984439451926e-38]
| f calls: 29502
| Total time: 0.0465 s
+============================+
julia> optimize(f, [-1 -1 -1; 1 1 1.0], SA(N = 100, x_initial = [1, 0.5, -1]))
+=========== RESULT ==========+
| Iter.: 300
| f(x) = 1.29349e-70
| solution.x = [-7.62307964668667e-36, 8.432089040013441e-36, -3.7077496015659554e-37]
| f calls: 29902
| Total time: 0.0466 s
+============================+
Whale Optimization Algorithm
Metaheuristics.WOA
— Type
WOA(;N = 30, information = Information(), options = Options())
Parameters for the Whale Optimization Algorithm. N is the population size (number of whales).
Example
julia> f(x) = sum(x.^2)
f (generic function with 1 method)
julia> optimize(f, [-1 -1 -1; 1 1 1.0], WOA())
+=========== RESULT ==========+
| Iter.: 499
| f(x) = 4.56174e-104
| solution.x = [-1.04059445339676e-52, 1.7743142412652892e-52, 5.750781222647098e-53]
| f calls: 15000
| Total time: 0.0844 s
+============================+
julia> optimize(f, [-1 -1 -1; 1 1 1.0], WOA(N = 100))
+=========== RESULT ==========+
| Iter.: 499
| f(x) = 1.29795e-147
| solution.x = [1.306372696781744e-74, -3.017649118559932e-75, 3.3439182063846375e-74]
| f calls: 50000
| Total time: 0.1894 s
+============================+
NSGA-II
Metaheuristics.NSGA2
— Type
function NSGA2(;
N = 100,
η_cr = 20,
p_cr = 0.9,
η_m = 20,
p_m = 1.0 / D,
ε = eps(),
information = Information(),
options = Options(),
)
Parameters for the metaheuristic NSGA-II.
Parameters:
- N: Population size.
- η_cr: η for the crossover.
- p_cr: Crossover probability.
- η_m: η for the mutation operator.
- p_m: Mutation probability (1/D for a D-dimensional problem by default).
To use NSGA2, the output from the objective function should be a 3-tuple (f::Vector, g::Vector, h::Vector), where f contains the objective functions, and g and h are the inequality and equality constraints, respectively.
A feasible solution is such that g_i(x) ≤ 0 and h_j(x) = 0.
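η_cr and η_m are distribution indices: larger values produce offspring closer to their parents. A one-variable sketch of simulated binary crossover (SBX), which η_cr controls (illustrative, not the package internals):

```julia
# SBX for a single variable (illustrative sketch). The spread factor β is
# sampled so that larger η_cr keeps children nearer the parents.
function sbx_pair(p1, p2, η_cr)
    u = rand()
    β = u <= 0.5 ? (2u)^(1 / (η_cr + 1)) :
                   (1 / (2 * (1 - u)))^(1 / (η_cr + 1))
    c1 = 0.5 * ((1 + β) * p1 + (1 - β) * p2)
    c2 = 0.5 * ((1 - β) * p1 + (1 + β) * p2)
    return c1, c2
end
c1, c2 = sbx_pair(0.2, 0.8, 20)   # children satisfy c1 + c2 == p1 + p2
```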
using Metaheuristics
# Dimension
D = 2
# Objective function
f(x) = ( x, [sum(x.^2) - 1], [0.0] )
# bounds
bounds = [-1 -1;
1 1.0
]
# define the parameters (use `NSGA2()` for using default parameters)
nsga2 = NSGA2(N = 100, p_cr = 0.85)
# optimize
status = optimize(f, bounds, nsga2)
# show results
display(status)
NSGA-III
Metaheuristics.NSGA3
— Type
function NSGA3(;
N = 100,
η_cr = 20,
p_cr = 0.9,
η_m = 20,
p_m = 1.0 / D,
ε = eps(),
information = Information(),
options = Options(),
)
Parameters for the metaheuristic NSGA-III.
Parameters:
- N: Population size.
- η_cr: η for the crossover.
- p_cr: Crossover probability.
- η_m: η for the mutation operator.
- p_m: Mutation probability (1/D for a D-dimensional problem by default).
To use NSGA3, the output from the objective function should be a 3-tuple (f::Vector, g::Vector, h::Vector), where f contains the objective functions, and g and h are the inequality and equality constraints, respectively.
A feasible solution is such that g_i(x) ≤ 0 and h_j(x) = 0.
using Metaheuristics
# Objective function, bounds, and the True Pareto front
f, bounds, pf = Metaheuristics.TestProblems.get_problem(:DTLZ2)
# define the parameters (use `NSGA3()` for using default parameters)
nsga3 = NSGA3(p_cr = 0.9)
# optimize
status = optimize(f, bounds, nsga3)
# show results
display(status)