GPMaxlik.gnll (Method)

The negative log-likelihood, its gradient, and the expected Fisher information matrix. Specify which of these you want with the keyword arguments nll=true, grad=true, and fish=true.

Maybe the only interesting thing here is the keyword arg saa. If you provide a matrix whose columns have the appropriate length, those columns will be used as inputs for symmetrized stochastic estimators of the gradient and expected Fisher matrix, so long as the gradient stays above SETTINGS.INFNORM_TOL in infinity norm. Once it drops below that tolerance, the setting is flipped and all future computations are exact. This can provide a substantial speedup, and the symmetrized stochastic estimators are quite accurate away from the MLE. So when the gradient is large, for example, you will lose very little efficiency by using the stochastic derivatives.
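As a rough sketch of what supplying saa vectors might look like (the Rademacher draw and the column count of 72 here are illustrative assumptions, not part of the API, and argument passing follows the keyword description above):

```julia
# Hypothetical sketch: draw 72 Rademacher (+/-1) columns to use as SAA inputs.
# Each column must have the "appropriate length", assumed here to be the
# number of observations in `data`.
n = length(data)
saa_vectors = rand((-1.0, 1.0), n, 72)

# Ask only for the gradient, estimated stochastically via the saa columns:
result = GPMaxlik.gnll(pts, data, covfn, derivs_covfn, params;
                       nll=false, grad=true, fish=false, saa=saa_vectors)
result.grad  # the (stochastic) gradient estimate
```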

Next, the operations on the large covariance matrices are very compartmentalized. The optional kwargs K and Kdv allow you to provide a black-box struct that implements just a few methods for your covariance matrix and its derivative matrices. The file "./src/covmatrix.jl" contains the default implementation of a reasonably careful dense and exact matrix. But if you wanted to specify a sparse precision matrix or something more exotic, you could make your own struct, implement the few required methods, and provide those in the kwargs below. The Kd2 object is only required if you ask for exact Fisher matrices.
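For flavor, a custom covariance struct might look roughly like the following. The method names here are purely hypothetical placeholders; consult "./src/covmatrix.jl" for the actual required interface.

```julia
using SparseArrays, LinearAlgebra

# Purely illustrative: the real required methods live in ./src/covmatrix.jl.
# This wraps a sparse precision matrix P = K^{-1}.
struct MySparsePrecision
    P::SparseMatrixCSC{Float64,Int}
end

# Hypothetical interface methods a black-box covariance type might implement:
# applying K^{-1} to a vector and computing logdet(K). Names are invented.
apply_inverse(K::MySparsePrecision, v) = K.P * v          # K^{-1} v = P v
covlogdet(K::MySparsePrecision) = -logdet(cholesky(K.P))  # logdet(K) = -logdet(P)
```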

Finally, store is optionally an object that caches some internals of the stochastic derivatives between function calls. It is mainly useful when you're interfacing with optimization software that requires separate gradient and Hessian functions, and using a store can completely eliminate the extra cost of calling those functions separately. See the example files for a full worked example and demonstration.
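For example, when an optimizer wants separate gradient and Hessian callbacks, a store might be threaded through like this. The make_store() constructor below is a stand-in assumption for however the package actually builds the store object; the point is only that both closures share it.

```julia
# Hypothetical sketch: `make_store()` stands in for the package's actual
# store constructor. Both closures below capture the same `store`.
store = make_store()
g(p) = GPMaxlik.gnll(pts, data, covfn, derivs_covfn, p;
                     grad=true, saa=saa_vectors, store=store).grad
h(p) = GPMaxlik.gnll(pts, data, covfn, derivs_covfn, p;
                     fish=true, saa=saa_vectors, store=store).fish
# Because the closures share `store`, internals computed during g(p) can be
# reused by h(p) at the same p instead of being recomputed from scratch.
```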

The output is a named tuple of the quantities you asked for, with fields named after the corresponding keyword arguments.

Signature:

gnll(pts, data, covfn, derivs_covfn, params, nll, grad, fish,
     saa=nothing, profile=false, vrb=false,
     K=nothing, Kdv=DerivativeCovarianceMatrix[], store=nothing)
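Putting it together, a call asking for all three outputs might look like this (keyword-style argument passing per the description above; treat the details as a sketch):

```julia
out = GPMaxlik.gnll(pts, data, covfn, derivs_covfn, params;
                    nll=true, grad=true, fish=true)
out.nll   # scalar negative log-likelihood
out.grad  # gradient vector
out.fish  # expected Fisher information matrix
```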

See the example files for demonstrations of using this to interface with various optimization software suites.

GPMaxlik.trustregion (Method)

A simple trust region method with a few twists that try to make it maximally flexible. The required arguments are
(1) objective
(2) objective+gradient [+hessian or third return value of nothing for BFGS]
(3) initial input vector
Along with the standard keyword args for a trust region method, you can provide the arg iter_funs, a collection of functions that are run at each iteration. They have a required signature; see GPMaxlik.convg_info for an example. With this you can add your own stopping conditions, customize the printed output to suit your needs, and so on. You can pass arbitrary keyword args to those functions via the iter_funs_kwargs argument, which takes an iterable list of pairs.
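As a hedged sketch of wiring these pieces together on a toy quadratic (the exact trustregion call shape and the iter_fun signature should be checked against GPMaxlik.convg_info; the keyword names beyond those described above are assumptions):

```julia
using LinearAlgebra

# Toy objective: f(x) = sum(x.^2), with gradient and Hessian.
# (1) objective alone:
obj(x) = sum(abs2, x)

# (2) objective + gradient + Hessian. Returning `nothing` as the third
# value instead would request BFGS, per the description above.
function objgh(x)
    g = 2 .* x
    H = Matrix(2.0 * I, length(x), length(x))
    return (sum(abs2, x), g, H)
end

# (3) initial input vector:
x0 = [1.0, -2.0]

# Hypothetical call shape; iter_funs entries follow the convg_info signature,
# and iter_funs_kwargs is an iterable of pairs forwarded to them.
result = GPMaxlik.trustregion(obj, objgh, x0;
                              iter_funs=[GPMaxlik.convg_info],
                              iter_funs_kwargs=[:tol => 1e-8])
```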