GalacticOptim.jl


GalacticOptim.jl is a package with a scope that is beyond your normal global optimization package. GalacticOptim.jl seeks to bring together all of the optimization packages it can find, local and global, into one unified Julia interface. This means you learn one package and you learn them all! GalacticOptim.jl adds a few high-level features, such as integration with automatic differentiation, to make its usage fairly simple for most cases, while still exposing all of the options of the underlying solvers through a single unified interface.

Note: This package is still under development. The README currently serves as both active documentation and a development roadmap.

Installation

Assuming that you already have Julia correctly installed, it suffices to install GalacticOptim.jl in the standard way:

import Pkg; Pkg.add("GalacticOptim")

The packages relevant to the basic functionality of GalacticOptim.jl will be installed automatically and, in most cases, you do not have to worry about manually installing dependencies.

Examples

using GalacticOptim, Optim

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

prob = OptimizationProblem(rosenbrock, x0, p)
sol = solve(prob, NelderMead())

using BlackBoxOptim
prob = OptimizationProblem(rosenbrock, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO())

Note that in order to use BlackBoxOptim.jl, you will have to install and load the package manually if you don't already have it; this is not necessary for Optim.jl, which is installed alongside GalacticOptim.jl.
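
If BlackBoxOptim.jl is not yet present, it can be added in the usual way:

import Pkg; Pkg.add("BlackBoxOptim")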

A sample output of the first optimization task (with the NelderMead() algorithm) is given below:

* Status: success

* Candidate solution
   Final objective value:     3.525527e-09

* Found with
   Algorithm:     Nelder-Mead

* Convergence measures
   √(Σ(yᵢ-ȳ)²)/n ≤ 1.0e-08

* Work counters
   Seconds run:   0  (vs limit Inf)
   Iterations:    60
   f(x) calls:    118

We can also explore other methods in a similar way:

f = OptimizationFunction(rosenbrock, GalacticOptim.AutoForwardDiff())
prob = OptimizationProblem(f, x0, p)
sol = solve(prob, BFGS())

For instance, the above optimization task may produce the following output:

* Status: success

* Candidate solution
   Final objective value:     7.645684e-21

* Found with
   Algorithm:     BFGS

* Convergence measures
   |x - x'|               = 3.48e-07 ≰ 0.0e+00
   |x - x'|/|x'|          = 3.48e-07 ≰ 0.0e+00
   |f(x) - f(x')|         = 6.91e-14 ≰ 0.0e+00
   |f(x) - f(x')|/|f(x')| = 9.03e+06 ≰ 0.0e+00
   |g(x)|                 = 2.32e-09 ≤ 1.0e-08

* Work counters
   Seconds run:   0  (vs limit Inf)
   Iterations:    16
   f(x) calls:    53
   ∇f(x) calls:   53

The same gradient-based setup also handles box constraints, for example via Optim.jl's Fminbox wrapper:

prob = OptimizationProblem(f, x0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, Fminbox(GradientDescent()))

These examples demonstrate that GalacticOptim.jl provides an intuitive way of specifying optimization tasks and relatively easy access to a wide range of optimization algorithms.

Automatic Differentiation Choices

While one can fully define all of the derivative functions associated with nonlinear constrained optimization directly, in many cases it is easiest to rely on automatic differentiation to derive them. In GalacticOptim.jl, you can provide as few derivative functions as you want and choose a differentiation library to fill in the rest. The available choices are listed below (see the sketch after the list):

  • AutoForwardDiff()
  • AutoReverseDiff(compile=false)
  • AutoTracker()
  • AutoZygote()
  • AutoFiniteDiff()
  • AutoModelingToolkit()
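
For instance, here is a minimal sketch contrasting the two routes on the Rosenbrock example above. The first OptimizationFunction lets ForwardDiff.jl derive the gradient; the second supplies a hand-written one through the grad keyword from the API section below. The in-place grad(G, x, p) signature is an assumption for illustration, not confirmed API:

using GalacticOptim, Optim

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

# Route 1: choose a backend and let it derive the gradient.
f_ad = OptimizationFunction(rosenbrock, GalacticOptim.AutoForwardDiff())
sol = solve(OptimizationProblem(f_ad, x0, p), BFGS())

# Route 2: supply a hand-written gradient.
# NOTE: the in-place (G, x, p) signature here is an assumption.
function rosenbrock_grad!(G, x, p)
    G[1] = -2 * (p[1] - x[1]) - 4 * p[2] * x[1] * (x[2] - x[1]^2)
    G[2] = 2 * p[2] * (x[2] - x[1]^2)
end
f_manual = OptimizationFunction(rosenbrock; grad = rosenbrock_grad!)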

API Documentation

OptimizationFunction(f, AutoForwardDiff();
                     grad = nothing,
                     hess = nothing,
                     hv = nothing,
                     chunksize = 1)
OptimizationProblem(f, x, p = DiffEqBase.NullParameters();
                    lb = nothing,
                    ub = nothing)
solve(prob, alg; kwargs...)

Keyword arguments:

  • maxiters (the maximum number of iterations)
  • abstol (absolute tolerance)
  • reltol (relative tolerance)
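
As a usage sketch, these keywords are forwarded through solve; which of them a given solver honors is backend-dependent, so treat this as illustrative:

sol = solve(prob, BFGS(); maxiters = 1000, abstol = 1e-8, reltol = 1e-8)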

Output Struct:
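
As a hedged sketch of inspecting the returned solution (the field names below mirror Optim.jl's conventions and are assumptions here, not confirmed API):

sol = solve(prob, NelderMead())
sol.minimizer  # assumed field: the optimizing point x
sol.minimum    # assumed field: the objective value at that point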